# Monte Carlo Methods
In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms.
While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.
### Part 0: Explore BlackjackEnv
We begin by importing the necessary packages.
```
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
print(sys.version)
```
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
```
env = gym.make('Blackjack-v0')
```
Each state is a 3-tuple of:
- the player's current sum $\in \{0, 1, \ldots, 31\}$,
- the dealer's face up card $\in \{1, \ldots, 10\}$, and
- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).
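For example, a (hypothetical) state is `(14, 10, False)`: the player's current sum is 14, the dealer's face-up card is a 10, and there is no usable ace.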
The agent has two potential actions:
```
STICK = 0
HIT = 1
```
Verify this by running the code cell below.
```
print(env.observation_space)
print(env.action_space)
```
Execute the code cell below to play Blackjack with a random policy.
(_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
```
for i_episode in range(3):
state = env.reset()
while True:
action = env.action_space.sample()
print(state, action)
state, reward, done, info = env.step(action)
print(state)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
```
### Part 1: MC Prediction
In this section, you will write your own implementation of MC prediction (for estimating the action-value function).
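As a reminder, MC prediction estimates $q_\pi(s, a)$ by averaging the returns observed after visits to the state-action pair $(s, a)$, where the return from time step $t$ is $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \ldots$ and $\gamma$ is the discount rate.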
We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy.
The function accepts as **input**:
- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.
It returns as **output**:
- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
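For instance, a (hypothetical) two-step episode under this policy might look like:
```
[((13, 2, False), 1, 0.0), ((19, 2, False), 1, -1.0)]
```
Here the agent hit from a sum of 13, hit again from 19 and went bust, so the final reward is $-1$.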
```
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
```
Execute the code cell below to play Blackjack with the policy.
(*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
```
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
```
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `generate_episode`: This is a function that returns an episode of interaction.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
```
N = defaultdict(lambda: np.zeros(env.action_space.n))
print(N)
import time
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
start=time.time()
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
#discounts = np.array([gamma**i for i in range(len(episode))])
reward = episode[-1][-1]
#print(episode, len(episode), reward)
for i, state in enumerate(states):
action = actions[i]
g = gamma**(len(episode)-1-i)*reward
#g = sum(discounts[:len(states)-i]*rewards[i:])
returns_sum[state][action] += g
N[state][action]+= 1
Q[state][action]= returns_sum[state][action]/N[state][action]
print("elapsed:", time.time()-start)
return Q
Q = mc_prediction_q(env, 1, generate_episode_from_limit_stochastic)
```
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.
To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
```
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
```
### Part 2: MC Control
In this section, you will write your own implementation of constant-$\alpha$ MC control.
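As a reminder, rather than averaging returns, constant-$\alpha$ MC control updates the action-value estimate after each episode via $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big(G_t - Q(S_t, A_t)\big)$, where $G_t$ is the return that followed the visit to $(S_t, A_t)$.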
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.
(_Feel free to define additional functions to help you to organize your code._)
```
def get_prob(Q_state, epsilon):
probs = epsilon*np.ones_like(Q_state)/len(Q_state)
    probs[np.argmax(Q_state)] += 1 - epsilon  # the greedy action gets the extra probability mass
return probs
#get_prob([40, 2], 0.1)
def generate_episode_epsilon_greedy(env, Q, epsilon):
episode = []
state = env.reset()
nA = env.action_space.n
while True:
# get probability
if state in Q:
probs = get_prob(Q[state], epsilon)
else:
probs = np.ones_like(Q[state])/nA
action = np.random.choice(np.arange(nA), p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, Q, episode, gamma, alpha):
states, actions, rewards = zip(*episode)
#discounts = np.array([gamma**i for i in range(len(episode))])
reward = episode[-1][-1]
#print(episode, len(episode), reward)
for i, state in enumerate(states):
action = actions[i]
g = gamma**(len(episode)-1-i)*reward
#g = sum(discounts[:len(states)-i]*rewards[i:])
Q[state][action] += alpha*(g-Q[state][action])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_start=1.0, epsilon_decay=0.999999, epsilon_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon=epsilon_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{} Epsilon {}.".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
## TODO: complete the function
# generate episode using epsilon-greedy
epsilon = max(epsilon*epsilon_decay, epsilon_min)
episode = generate_episode_epsilon_greedy(env, Q, epsilon)
# update Q using constant alpha
Q = update_Q(env, Q, episode, gamma, alpha)
policy = dict((k,np.argmax(v)) for k,v in Q.items())
return policy, Q
```
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
```
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.1)
```
Next, we plot the corresponding state-value function.
```
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
```
Finally, we visualize the policy that is estimated to be optimal.
```
# plot the policy
plot_policy(policy)
```
The **true** optimal policy $\pi_*$ can be found in Figure 5.2 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\epsilon$, change the value of $\alpha$, and/or run the algorithm for more episodes to attain better results.

```
for k, v in policy.items():
if k[2]:
print(k,v)
```
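If your estimated policy still differs noticeably from the optimal one, a reasonable next step is to rerun training with a slower $\epsilon$ decay, a smaller $\alpha$, and more episodes; for example (hypothetical settings):
```
# hypothetical hyperparameters - adjust to taste
policy, Q = mc_control(env, 2000000, 0.02, epsilon_decay=0.9999995, epsilon_min=0.05)
plot_policy(policy)
```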
# Narrowcast Server service migration to Distribution Services
## 1. Getting data from NC
### 1.1 List of NC Services
```
# Run this SQL code against the Narrowcast Server database
"""
select
names1.MR_OBJECT_ID AS serviceID,
names1.MR_OBJECT_NAME AS service_name,
parent1.MR_OBJECT_NAME AS foldername,
names2.MR_OBJECT_NAME AS publication_name,
names3.MR_OBJECT_NAME AS document_name,
info3.MR_OBJECT_SUBTYPE AS doc_type,
names4.MR_OBJECT_ID AS info_obj_id,
names4.MR_OBJECT_NAME AS info_obj_name,
info4.MR_OBJECT_SUBTYPE AS info_obj_subtype
from
MSTROBJNAMES names1,
MSTROBJINFO info1,
MSTROBJNAMES parent1,
MSTROBJDEPN dpns,
MSTROBJNames names2,
MSTROBJDEPN dpns2,
MSTROBJNames names3,
MSTROBJINFO info3,
MSTROBJDEPN dpns3,
MSTROBJNames names4,
MSTROBJInfo info4
where names1.MR_OBJECT_ID = dpns.MR_INDEP_OBJID
and names1.MR_OBJECT_ID = info1.MR_OBJECT_ID
and info1.MR_PARENT_ID = parent1.MR_OBJECT_ID
and dpns.MR_DEPN_OBJID = names2.MR_OBJECT_ID
and names2.MR_OBJECT_ID = dpns2.MR_INDEP_OBJID
and dpns2.MR_DEPN_OBJID = names3.MR_OBJECT_ID
and names3.MR_OBJECT_ID = dpns3.MR_INDEP_OBJID
and names3.MR_OBJECT_ID = info3.MR_OBJECT_ID
and dpns3.MR_DEPN_OBJID = names4.MR_OBJECT_ID
and dpns3.MR_DEPN_OBJID = info4.MR_OBJECT_ID
and names1.MR_Object_Type = 19
and names2.MR_Object_Type = 16
and names3.MR_Object_Type = 14
and names4.MR_Object_Type = 4
and info4.MR_OBJECT_SubType <> 1
"""
```
<img src="Images/NC_services.png">
### 1.2 NC Service details
```
"""
select
names1.MR_OBJECT_ID AS serviceID, --This is Service ID
names1.MR_OBJECT_NAME AS service_name,
names2.MR_OBJECT_NAME AS subset_name,
a11.MR_ADD_DISPLAY AS dispname,
a11.MR_PHYSICAL_ADD AS email,
a13.MR_USER_NAME,
sp.MR_INFOSOURCE_ID,
sp.MR_QUES_OBJ_ID,
po.mr_seq,
sp.MR_USER_PREF,
po.MR_PREF_OBJ
from
MSTROBJNames names1,
MSTROBJINFO info1,
MSTROBJDEPN dpns,
MSTROBJNames names2,
MSTRSUBSCRIPTIONS a12,
MSTRADDRESSES a11,
MSTRUSERS a13,
MSTRSUBPREF sp,
MSTRPREFOBJS po
where names1.MR_Object_Type = 19
and names2.MR_Object_Type = 17
and info1.MR_STATUS =1
and names1.MR_OBJECT_ID = info1.MR_OBJECT_ID
and names1.MR_OBJECT_ID = dpns.MR_INDEP_OBJID
and dpns.MR_DEPN_OBJID = names2.MR_OBJECT_ID
and names2.MR_OBJECT_ID = a12.MR_SUB_SET_ID
and a11.MR_ADDRESS_ID = a12.MR_ADDRESS_ID
and a12.MR_SUB_GUID = sp.MR_SUB_GUID
and sp.MR_PREF_OBJ_ID = po.MR_PREF_OBJ_ID
and a12.MR_USER_ID = a13.MR_USER_ID
and names1.MR_OBJECT_ID = '047886F8A7474F4A929EC6DD135F0A98' --Filter for Service ID
"""
```
<img src="Images/service_details.png">
```
with open('narrowcast_emails.csv', encoding="utf8", newline='') as f:
email_list = [x.strip() for x in f]
```
## Automate tasks in MicroStrategy
```
from mstrio.connection import Connection
from mstrio.distribution_services import EmailSubscription, Content
from mstrio.users_and_groups.user import list_users
from datetime import datetime
#### Parameters ####
api_login, api_password = 'administrator', ''
base_url = 'Insert Env URL'
project_id = 'Insert Project ID'
conn = Connection(base_url,api_login,api_password)
```
### Get users' default addresses
```
users = list_users(connection=conn)
default_addresses=[]
for u in users:
if u.addresses:
user_addresses = [[u.name, u.id, uad['value']] for uad in u.addresses if uad['isDefault']==True]
default_addresses.extend(user_addresses)
```
### Create a list of recipients
```
# From MSTR Metadata
for d in default_addresses:
print(d)
# From Narrowcast
for e in email_list:
print(e)
# Match Metadata with Narrowcast
matched_emails = [d[1] for d in default_addresses if d[2] in email_list]
for m in matched_emails:
print(m)
```
### Create a subscription
```
# create an email subscription
recipient_ids = matched_emails[:]
content_id = 'Insert Content ID'
schedule_id = 'Insert Schedule ID'
subscription_name = 'REST_API_'+datetime.now().strftime("%Y-%m-%d__%H-%M")
subject_txt='Email Subject'
message_txt="Message Text"
EmailSubscription.create(connection=conn,
name=subscription_name,
project_id=project_id,
send_now = True,
contents=[Content(id=content_id, type='report', name='Report 1',
personalization=Content.Properties(format_type='EXCEL'))],
schedules_ids=[schedule_id],
recipients=recipient_ids,
email_subject=subject_txt,
email_message=message_txt,
email_send_content_as="data")
```
# High-level Keras (Theano) Example
```
# Lots of warnings!
# Not sure why Keras creates model with float64?
%%writefile ~/.theanorc
[global]
device = cuda0
force_device= True
floatX = float32
warn_float64 = warn
import os
import sys
import numpy as np
os.environ['KERAS_BACKEND'] = "theano"
import theano
import keras as K
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from common.params import *
from common.utils import *
# Force one-gpu
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# Performance Improvement
# 1. Make sure channels-first (not last)
K.backend.set_image_data_format('channels_first')
# 2. CuDNN auto-tune
theano.config.dnn.conv.algo_fwd = "time_once"
theano.config.dnn.conv.algo_bwd_filter = "time_once"
theano.config.dnn.conv.algo_bwd_data = "time_once"
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Keras: ", K.__version__)
print("Numpy: ", np.__version__)
print("Theano: ", theano.__version__)
print(K.backend.backend())
print(K.backend.image_data_format())
print("GPU: ", get_gpu_name())
print(get_cuda_version())
print("CuDNN Version ", get_cudnn_version())
def create_symbol(n_classes=N_CLASSES):
model = Sequential()
model.add(Conv2D(50, kernel_size=(3, 3), padding='same', activation='relu',
input_shape=(3, 32, 32)))
model.add(Conv2D(50, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(100, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(Conv2D(100, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))
return model
def init_model(m, lr=LR, momentum=MOMENTUM):
m.compile(
loss = "categorical_crossentropy",
optimizer = K.optimizers.SGD(lr, momentum),
metrics = ['accuracy'])
return m
%%time
# Data into format for library
x_train, x_test, y_train, y_test = cifar_for_library(channel_first=True, one_hot=True)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)
%%time
# Load symbol
sym = create_symbol()
%%time
# Initialise model
model = init_model(sym)
model.summary()
%%time
# Main training loop: 1m33s
model.fit(x_train,
y_train,
batch_size=BATCHSIZE,
epochs=EPOCHS,
verbose=1)
%%time
# Main evaluation loop: 2.47s
y_guess = model.predict(x_test, batch_size=BATCHSIZE)
y_guess = np.argmax(y_guess, axis=-1)
y_truth = np.argmax(y_test, axis=-1)
print("Accuracy: ", 1.*sum(y_guess == y_truth)/len(y_guess))
```
### 6. Python API Training - Continuous Model Training [Solution]
<b>Author:</b> Thodoris Petropoulos <br>
<b>Contributors:</b> Rajiv Shah
This is the 6th exercise to complete in order to finish your `Python API Training for DataRobot` course! This exercise teaches you how to deploy a trained model, make predictions (**Warning**: Multiple ways of getting predictions out of DataRobot), and monitor drift to replace a model.
Here are the actual sections of the notebook alongside time to complete:
1. Connect to DataRobot. [3min]<br>
2. Retrieve the first project created in `Exercise 4 - Model Factory`. [5min]
3. Search for the `recommended for deployment` model and deploy it as a rest API. [20min]
4. Create a scoring procedure using dataset (1) that will force data drift on that deployment. [25min]
5. Check data drift. Does it look like the data is drifting? [3min]
6. Create a new project using data (2). [5min]
7. Replace the previously deployed model with the new `recommended for deployment` model from the new project. [10min]
Each section will have specific instructions so do not worry if things are still blurry!
As always, consult:
- [API Documentation](https://datarobot-public-api-client.readthedocs-hosted.com)
- [Samples](https://github.com/datarobot-community/examples-for-data-scientists)
- [Tutorials](https://github.com/datarobot-community/tutorials-for-data-scientists)
The last two links should provide you with the snippets you need to complete most of these exercises.
<b>Data</b>
(1) The dataset we will be using throughout these exercises is the well-known `readmissions dataset`. You can access it or directly download it through DataRobot's public S3 bucket [here](https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes.csv).
(2) This dataset will be used to retrain the model. It can be accessed [here](https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv) through DataRobot's public S3 bucket.
### Import Libraries
Import libraries here as you start finding out what libraries are needed. The DataRobot package is already included for your convenience.
```
import datarobot as dr
#Proposed Libraries needed
import pandas as pd
```
### 1. Connect to DataRobot [3min]
```
#Possible solution
dr.Client(config_path='../../github/config.yaml')
```
### 2. Retrieve the first project created in `Exercise 4 - Model Factory`. [5min]
This should be the first project created during the exercise. Not one of the projects created using a sample of `readmission_type_id`.
```
#Proposed Solution
project = dr.Project.get('YOUR_PROJECT_ID')
```
### 3. Search for the `recommended for deployment` model and deploy it as a rest API. [10min]
**Hint**: The recommended model can be found using the `DataRobot.ModelRecommendation` method.
**Hint 2**: Use the `update_drift_tracking_settings` method on the DataRobot Deployment object to enable data drift tracking.
```
# Proposed Solution
#Find the recommended model
recommended_model = dr.ModelRecommendation.get(project.id).get_model()
#Deploy the model
prediction_server = dr.PredictionServer.list()[0]
deployment = dr.Deployment.create_from_learning_model(recommended_model.id, label='Readmissions Deployment', default_prediction_server_id=prediction_server.id)
deployment.update_drift_tracking_settings(feature_drift_enabled=True)
```
### 4. Create a scoring procedure using dataset (1) that will force data drift on that deployment. [25min]
**Instructions**
1. Take the first 100 rows of dataset (1) and save them to a Pandas DataFrame
2. Score 5 times using these observations to force drift.
3. Use the deployment you created during `question 3`.
**Hint**: The easiest way to score using a deployed model in DataRobot is to go to the `Deployments` page within DataRobot and navigate to the `Integrations` and `scoring code` tab. There you will find sample code for Python that you can use to score.
**Hint 2**: The only thing you will have to change for the code to work is change the filename variable to point to the csv file to be scored and create a for loop.
```
# Proposed Solution
#Save the dataset that is going to be scored as a csv file
scoring_dataset = pd.read_csv('https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes.csv').head(100)
scoring_dataset.to_csv('scoring_dataset.csv', index=False)
#This has been copied from the `integrations` tab.
#The only thing you actually have to do is change the filename variable in the bottom of the script and
#create the for loop.
"""
Usage:
python datarobot-predict.py <input-file.csv>
This example uses the requests library which you can install with:
pip install requests
We highly recommend that you update SSL certificates with:
pip install -U urllib3[secure] certifi
"""
import sys
import json
import requests
DATAROBOT_KEY = ''
API_KEY = ''
USERNAME = ''
DEPLOYMENT_ID = ''
MAX_PREDICTION_FILE_SIZE_BYTES = 52428800 # 50 MB
class DataRobotPredictionError(Exception):
"""Raised if there are issues getting predictions from DataRobot"""
def make_datarobot_deployment_predictions(data, deployment_id):
"""
Make predictions on data provided using DataRobot deployment_id provided.
See docs for details:
https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html
Parameters
----------
data : str
Feature1,Feature2
numeric_value,string
deployment_id : str
The ID of the deployment to make predictions with.
Returns
-------
Response schema:
https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html#response-schema
Raises
------
DataRobotPredictionError if there are issues getting predictions from DataRobot
"""
# Set HTTP headers. The charset should match the contents of the file.
headers = {'Content-Type': 'text/plain; charset=UTF-8', 'datarobot-key': DATAROBOT_KEY}
url = 'https://cfds.orm.eu.datarobot.com/predApi/v1.0/deployments/{deployment_id}/'\
'predictions'.format(deployment_id=deployment_id)
# Make API request for predictions
predictions_response = requests.post(
url,
auth=(USERNAME, API_KEY),
data=data,
headers=headers,
)
_raise_dataroboterror_for_status(predictions_response)
# Return a Python dict following the schema in the documentation
return predictions_response.json()
def _raise_dataroboterror_for_status(response):
"""Raise DataRobotPredictionError if the request fails along with the response returned"""
try:
response.raise_for_status()
except requests.exceptions.HTTPError:
err_msg = '{code} Error: {msg}'.format(
code=response.status_code, msg=response.text)
raise DataRobotPredictionError(err_msg)
def main(filename, deployment_id):
"""
Return an exit code on script completion or error. Codes > 0 are errors to the shell.
Also useful as a usage demonstration of
`make_datarobot_deployment_predictions(data, deployment_id)`
"""
if not filename:
print(
'Input file is required argument. '
'Usage: python datarobot-predict.py <input-file.csv>')
return 1
data = open(filename, 'rb').read()
data_size = sys.getsizeof(data)
if data_size >= MAX_PREDICTION_FILE_SIZE_BYTES:
        print(
            'Input file is too large: {} bytes. '
            'Max allowed size is: {} bytes.'.format(
                data_size, MAX_PREDICTION_FILE_SIZE_BYTES))
return 1
try:
predictions = make_datarobot_deployment_predictions(data, deployment_id)
except DataRobotPredictionError as exc:
print(exc)
return 1
print(json.dumps(predictions, indent=4))
return 0
for i in range(0,5):
filename = 'scoring_dataset.csv'
main(filename, DEPLOYMENT_ID)
```
### 5. Check data drift. Does it look like the data is drifting? [3min]
Check data drift from within the `Deployments` page in the UI. Is data drift marked as red?
### 6. Create a new project using data (2). [5min]
Link to data: https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv
```
#Proposed solution
new_project = dr.Project.create(sourcedata = 'https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv',
project_name = '06_New_Project')
new_project.set_target(target = 'readmitted', mode = 'quick', worker_count = -1)
new_project.wait_for_autopilot()
```
### 7. Replace the previously deployed model with the new `recommended for deployment` model from the new project. [10min]
**Hint**: You will have to provide a reason why you are replacing the model. Try: `dr.enums.MODEL_REPLACEMENT_REASON.DATA_DRIFT`.
```
#Proposed Solution
new_recommended_model = dr.ModelRecommendation.get(new_project.id).get_model()
deployment.replace_model(new_recommended_model.id, dr.enums.MODEL_REPLACEMENT_REASON.DATA_DRIFT)
```
```
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
```
Notes:
- Initialize the bias `b` to zero.
- Initialize the weights `w` with TensorFlow, not NumPy.
```
# Read the MNIST dataset from the data/MNIST folder under the current directory; if it is not there, download it into that folder
# Each image has 28*28=784 pixels; each pixel is a float representing its brightness
mnist = input_data.read_data_sets("./data/MNIST/", one_hot=True)
# This function plots the generated images
def plot(samples):
fig = plt.figure(figsize=(4, 4))
gs = gridspec.GridSpec(4, 4)
gs.update(wspace=0.05, hspace=0.05)
for i, sample in enumerate(samples):
ax = plt.subplot(gs[i])
plt.axis('off')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_aspect('equal')
plt.imshow(sample.reshape(28, 28), cmap='Greys_r')
return fig
batch_size = 128
z_dim = 200
def variable_init(size):
# He initialization: sqrt(2./dim of the previous layer)
# np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2./layers_dims[l-1])
in_dim = size[0]
return tf.random_normal(shape=size, stddev=np.sqrt(2./in_dim))
# Define and initialize the variables
X = tf.placeholder(tf.float32, shape=(None, 784))
Z = tf.placeholder(tf.float32, shape=(None, z_dim))
DW1 = tf.Variable(variable_init([784, 128]))
Db1 = tf.Variable(tf.zeros(shape=[128]))
DW2 = tf.Variable(variable_init([128, 1]))
Db2 = tf.Variable(tf.zeros(shape=[1]))
theta_D = [DW1, DW2, Db1, Db2]
GW1 = tf.Variable(variable_init([z_dim, 128]))
Gb1 = tf.Variable(tf.zeros(shape=[128]))
GW2 = tf.Variable(variable_init([128, 784]))
Gb2 = tf.Variable(tf.zeros(shape=[784]))
theta_G = [GW1, GW2, Gb1, Gb2]
# Random-noise generator
# Samples the latent vector z
def noise_maker(m, n):
return np.random.uniform(-1.0, 1.0, size=[m, n])
# Generator: maps z to per-pixel probabilities
# The output is interpreted as a generated image
# Produces an N * 784 result
def generator(z):
    # tanh, relu, ... would all work here
Gh1 = tf.nn.relu(tf.matmul(z, GW1) + Gb1)
G_logit = tf.matmul(Gh1, GW2) + Gb2
    # Sigmoid is used here because the pixel values do not need to sum to 1
G_prob = tf.nn.sigmoid(G_logit)
return G_prob
# Discriminator
def discriminator(x):
    # tanh, relu, ...
Dh1 = tf.nn.relu(tf.matmul(x, DW1) + Db1)
D_logit = tf.matmul(Dh1, DW2) + Db2
# D_prob = tf.nn.sigmoid(D_logit)
return D_logit # , D_prob
# Define the loss functions
D_real_logit = discriminator(X) # D_real_prob,
D_fake_logit = discriminator(generator(Z)) # D_fake_prob,
D_X = tf.concat([D_real_logit, D_fake_logit], 1)
D_y = tf.concat([tf.ones_like(D_real_logit), tf.zeros_like(D_fake_logit)], 1)
D_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_X, labels=D_y))
G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_fake_logit, labels=tf.ones_like(D_fake_logit)))
D_opt = tf.train.AdamOptimizer().minimize(D_loss, var_list=theta_D)
G_opt = tf.train.AdamOptimizer().minimize(G_loss, var_list=theta_G)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
if not os.path.exists('out_exercise/'):
os.makedirs('out_exercise/')
i = 0
for it in range(20000):
if it % 2000 == 0:
        # 16 sample images
samples = sess.run(generator(Z), feed_dict={Z: noise_maker(16, z_dim)})
fig = plot(samples)
plt.savefig('out_exercise/{}.png'.format(str(i).zfill(3)), bbox_inches='tight')
i += 1
plt.close(fig)
X_mb, _ = mnist.train.next_batch(batch_size)
_, D_loss_curr = sess.run([D_opt, D_loss], feed_dict={X: X_mb, Z: noise_maker(batch_size, z_dim)})
_, G_loss_curr = sess.run([G_opt, G_loss], feed_dict={Z: noise_maker(batch_size, z_dim)})
# sam,fakeprob,fakelogit = sess.run([generator(Z), D_fake_prob, D_fake_logit],
# feed_dict={X: X_mb, Z: noise_maker(batch_size, z_dim)})
if it % 2000 == 0:
print('Iter: {} D_loss: {:.4}, G_loss: {:.4}'.format(it, D_loss_curr, G_loss_curr))
samples.shape
plot(samples)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes this Model Maker library to illustrate the adaption and conversion of a commonly-used text classification model to classify movie reviews on a mobile device.
## Prerequisites
To run this example, we first need to install several required packages, including the Model Maker package from the GitHub [repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
```
!pip install git+https://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]
```
Import the required packages.
```
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_maker.core.data_util.text_dataloader import TextClassifierDataLoader
from tensorflow_examples.lite.model_maker.core.task.model_spec import AverageWordVecModelSpec
from tensorflow_examples.lite.model_maker.core.task.model_spec import BertClassifierModelSpec
from tensorflow_examples.lite.model_maker.core.task import text_classifier
```
## Simple End-to-End Example
### Get the data path
Let's get some texts to play with this simple end-to-end example.
```
data_path = tf.keras.utils.get_file(
fname='aclImdb',
origin='http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',
untar=True)
```
You could replace it with your own text folders. To upload data to Colab, use the upload button in the left sidebar, highlighted with a red rectangle in the image below. Try uploading a zip file and unzipping it; the root file path is the current path.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your data to the cloud, you can run the library locally by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker) on GitHub.
### Run the example
The example just consists of 6 lines of code as shown below, representing 5 steps of the overall process.
Step 0. Choose a `model_spec` that represents a model for text classifier.
```
model_spec = AverageWordVecModelSpec()
```
Step 1. Load train and test data specific to an on-device ML app and preprocess the data according to specific `model_spec`.
```
train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])
test_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)
```
Step 2. Customize the TensorFlow model.
```
model = text_classifier.create(train_data, model_spec=model_spec)
```
Step 3. Evaluate the model.
```
loss, acc = model.evaluate(test_data)
```
Step 4. Export to TensorFlow Lite model.
You can download it from the left sidebar, in the same way as the upload step above, for your own use.
```
model.export(export_dir='.')
```
After these 5 simple steps, we can use the TensorFlow Lite model file and label file in on-device applications such as the [text classification](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification) reference app.
## Detailed Process
In the above, we tried the simple end-to-end example. The following walks through the example step by step to show more detail.
### Step 0: Choose a model_spec that represents a model for text classifier.
Each `model_spec` object represents a specific model for the text classifier. Currently, we support the averaging word embedding model and the BERT-base model.
```
model_spec = AverageWordVecModelSpec()
```
### Step 1: Load Input Data Specific to an On-device ML App
The IMDB dataset contains 25000 movie reviews for training and 25000 movie reviews for testing from the [Internet Movie Database](https://www.imdb.com/). The dataset has two classes: positive and negative movie reviews.
Download the archive version of the dataset and untar it.
The IMDB dataset has the following directory structure:
<pre>
<b>aclImdb</b>
|__ <b>train</b>
|______ <b>pos</b>: [1962_10.txt, 2499_10.txt, ...]
|______ <b>neg</b>: [104_3.txt, 109_2.txt, ...]
|______ unsup: [12099_0.txt, 1424_0.txt, ...]
|__ <b>test</b>
|______ <b>pos</b>: [1384_9.txt, 191_9.txt, ...]
|______ <b>neg</b>: [1629_1.txt, 21_1.txt]
</pre>
Note that the text data under `train/unsup` folder are unlabeled documents for unsupervised learning and such data should be ignored in this tutorial.
```
data_path = tf.keras.utils.get_file(
fname='aclImdb',
origin='http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',
untar=True)
```
Use `TextClassifierDataLoader` to load data.
The `from_folder()` method loads data from a folder. It assumes that text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample.
The `class_labels` parameter specifies which subfolders should be considered. For the `train` folder, this parameter is used to skip the `unsup` subfolder.
```
train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])
test_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)
train_data, validation_data = train_data.split(0.9)
```
### Step 2: Customize the TensorFlow Model
Create a custom text classifier model based on the loaded data. Currently, we support averaging word embedding and BERT-base model.
```
model = text_classifier.create(train_data, model_spec=model_spec, validation_data=validation_data)
```
Have a look at the detailed model structure.
```
model.summary()
```
### Step 3: Evaluate the Customized Model
Evaluate the result of the model, get the loss and accuracy of the model.
Evaluate the loss and accuracy on `test_data`. If no data is given, the results are evaluated on the data that was split off in the `create` method.
```
loss, acc = model.evaluate(test_data)
```
### Step 4: Export to TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that could be later used in on-device ML application. Meanwhile, save the text labels in label file and vocabulary in vocab file. The default TFLite filename is `model.tflite`, the default label filename is `label.txt`, the default vocab filename is `vocab`.
```
model.export(export_dir='.')
```
The TensorFlow Lite model file and label file could be used in the [text classification](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification) reference app.
In detail, we could add `movie_review_classifier.tflite`, `text_label.txt` and `vocab.txt` to the [assets directory](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification/android/app/src/main/assets) folder. Meanwhile, change the filenames in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/text_classification/android/app/src/main/java/org/tensorflow/lite/examples/textclassification/TextClassificationClient.java#L43).
Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
```
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('model.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test data and calculate accuracy.
accurate_count = 0
for text, label in test_data.dataset:
# Add batch dimension and convert to float32 to match with the model's input
# data format.
text = tf.expand_dims(text, 0)
# Run inference.
interpreter.set_tensor(input_index, text)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
```
Note that preprocessing for inference should be the same as for training. Currently, preprocessing consists of splitting the text into tokens on '\W', encoding the tokens to ids, and then padding the token ids with `pad_id` to a length of `seq_length`.
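For intuition, here is a minimal sketch of this style of preprocessing (an illustration only, not the library's implementation; the regex, toy vocabulary, and helper name are assumptions):
```
import re

def simple_preprocess(text, vocab, pad_id=0, seq_length=8):
    # Split the text into tokens on non-word characters ('\W').
    tokens = [t for t in re.split(r'\W+', text.lower()) if t]
    # Encode the tokens to int ids; unknown tokens fall back to pad_id here.
    ids = [vocab.get(t, pad_id) for t in tokens]
    # Pad (or truncate) to a fixed sequence length.
    return ids[:seq_length] + [pad_id] * max(0, seq_length - len(ids))

# Hypothetical toy vocabulary, for illustration only.
toy_vocab = {'great': 1, 'movie': 2, 'boring': 3}
print(simple_preprocess('A great movie!', toy_vocab))  # e.g. [0, 1, 2, 0, 0, 0, 0, 0]
```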
## Advanced Usage
The `create` function is the critical part of this library, and its `model_spec` parameter defines the specification of the model; currently `AverageWordVecModelSpec` and `BertClassifierModelSpec` are supported. For `AverageWordVecModelSpec`, the `create` function contains the following steps:
1. Tokenize the text and select the top `num_words` most frequent words to generate the vocabulary. The default value of `num_words` in the `AverageWordVecModelSpec` object is `10000`.
2. Encode the text string tokens to int ids.
3. Create the text classifier model. Currently, this library supports one such model: average the word embeddings of the text, apply a RELU activation, and then use a softmax dense layer for classification (a rough sketch appears after this list). For the [Embedding layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding), the input dimension is the size of the vocabulary, the output dimension is the `AverageWordVecModelSpec` object's variable `wordvec_dim` (default `16`), and the input length is the `AverageWordVecModelSpec` object's variable `seq_len` (default `256`).
4. Train the classifier model. The default number of epochs is `2` and the default batch size is `32`.
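Below is a rough Keras sketch of the architecture described in step 3, using the default values above (an illustration under stated assumptions, not the library's exact implementation):
```
import tensorflow as tf

num_words, wordvec_dim, seq_len, num_classes = 10000, 16, 256, 2

sketch = tf.keras.Sequential([
    # Map each token id to a wordvec_dim-dimensional embedding.
    tf.keras.layers.Embedding(num_words, wordvec_dim, input_length=seq_len),
    # Average the word embeddings over the sequence.
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(wordvec_dim, activation='relu'),
    # Softmax dense layer for classification.
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
sketch.summary()
```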
In this section, we describe several advanced topics, including adjusting the model, changing the training hyperparameters etc.
## Adjust the model
We could adjust the model infrastructure like variables `wordvec_dim`, `seq_len` in `AverageWordVecModelSpec` class.
* `wordvec_dim`: Dimension of word embedding.
* `seq_len`: length of sequence.
For example, we could train with a larger `wordvec_dim`. If we change the model, we first need to construct a new `model_spec`.
```
new_model_spec = AverageWordVecModelSpec(wordvec_dim=32)
```
Secondly, we should get the preprocessed data accordingly.
```
new_train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=new_model_spec, class_labels=['pos', 'neg'])
new_train_data, new_validation_data = new_train_data.split(0.9)
```
Finally, we could train the new model.
```
model = text_classifier.create(new_train_data, model_spec=new_model_spec, validation_data=new_validation_data)
```
### Change the training hyperparameters
We could also change the training hyperparameters like `epochs` and `batch_size` that could affect the model accuracy. For instance,
* `epochs`: more epochs could achieve better accuracy, but may lead to overfitting.
* `batch_size`: number of samples to use in one training step.
For example, we could train with more epochs.
```
model = text_classifier.create(train_data, model_spec=model_spec, validation_data=validation_data, epochs=5)
```
Evaluate the newly retrained model with 5 training epochs.
```
loss, accuracy = model.evaluate(test_data)
```
### Change the Model
We could change the model by changing the `model_spec`. The following shows how to change to the BERT-base model.
First, we change `model_spec` to `BertClassifierModelSpec`.
```
model_spec = BertClassifierModelSpec()
```
The remaining steps remain the same.
Load data and preprocess the data according to `model_spec`.
```
train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])
test_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)
```
Then retrain the model. Note that it can take a long time to retrain the BERT model, so we just set `epochs` to 1 to demonstrate it.
```
model = text_classifier.create(train_data, model_spec=model_spec, epochs=1)
```
# Retraining of top performing FFNN
## Imports
```
# General imports
import sys
import os
sys.path.insert(1, os.path.join(os.pardir, 'src'))
from itertools import product
# Data imports
import cv2
import torch
import mlflow
import numpy as np
from mlflow.tracking.client import MlflowClient
from torchvision import datasets, transforms
# Homebrew imports
import model
from utils import one_hot_encode_index
from optimizers import Adam
from activations import Softmax, ReLU
from layers import Dropout, LinearLayer
from loss import CategoricalCrossEntropyLoss
# pytorch imports
from torch import nn, cuda, optim, no_grad
import torch.nn.functional as F
from torchvision import transforms
## TESTING
import importlib
importlib.reload(model)
##
```
## Finding best runs
```
# querying results to see best 2 performing homebrew models
query = "params.data_split = '90/10' and params.type = 'FFNN' and params.framework = 'homebrew'"
hb_runs = MlflowClient().search_runs(
experiment_ids="8",
filter_string=query,
max_results=1,
order_by=["metrics.validation_accuracy DESC"]
)
query = "params.data_split = '90/10' and params.type = 'FFNN' and params.framework = 'pytorch'"
pt_runs = MlflowClient().search_runs(
experiment_ids="8",
filter_string=query,
max_results=1,
order_by=["metrics.validation_accuracy DESC"]
)
```
## Setup data loaders
```
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(32),
transforms.RandomHorizontalFlip(),
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor(),
transforms.Normalize([0.5],[0.5])
])
test_transforms = transforms.Compose([transforms.Resize(33),
transforms.CenterCrop(32),
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor(),
transforms.Normalize([0.5],[0.5])
])
# setting up data loaders
data_dir = os.path.join(os.pardir, 'data', 'Plant_leave_diseases_32')
train_data = datasets.ImageFolder(os.path.join(data_dir, 'train'), transform=train_transforms)
test_data = datasets.ImageFolder(os.path.join(data_dir, 'validation'), transform=test_transforms)
```
## Training 'Homebrew' models
```
# Getting Configs
par = hb_runs[0].data.params
config = {'data_split': par['data_split'],
'decay': np.float64(par['decay']),
'dropout': np.float64(par['dropout']),
'framework': par['framework'],
'learning_rate': np.float64(par['learning_rate']),
'max_epochs': int(par['max_epochs']),
'resolution': int(par['resolution']),
'type': par['type']}
mlflow.set_experiment("Plant Leaf Disease")
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
validation_loader = torch.utils.data.DataLoader(test_data, batch_size=64, shuffle=True)
# initialize model
mdl = model.Model(Adam(learning_rate=config['learning_rate'], decay=config['decay']),
CategoricalCrossEntropyLoss())
# Config early stop
mdl.add_early_stop(25)
# save config
mdl.set_save_config(model_name='FFNN_top_homebrew', save_path=os.path.join('models'))
# Defining architecture
mdl.set_sequence([
LinearLayer(32*32, 1024),
ReLU(),
Dropout(config['dropout']),
LinearLayer(1024, 512),
ReLU(),
Dropout(config['dropout']),
LinearLayer(512, 39),
Softmax()
])
with mlflow.start_run():
mlflow.log_params(config)
mdl.train_with_loader(train_loader, epochs=config['max_epochs'], validation_loader=validation_loader, cls_count=39, flatten_input=True)
```
### Training PyTorch Model
```
#### Net and training function
class PlantDiseaseNet(nn.Module):
def __init__(self, input_size=1024, l1=1024, l2=512, output_size=39, dropout_p=0.5):
super(PlantDiseaseNet, self).__init__()
self.fc1 = nn.Linear(input_size, l1)
self.fc2 = nn.Linear(l1, l2)
self.fc3 = nn.Linear(l2, output_size)
self.dropout = nn.Dropout(dropout_p)
def forward(self, x):
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.dropout(x)
x = F.log_softmax(self.fc3(x), dim=1)
return x
def train(model, train_loader, validation_loader, config, n_epochs=10, stopping_treshold=None):
if torch.cuda.is_available():
print('CUDA is available! Training on GPU ...')
model.cuda()
# Loss and optimizer setup
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=config['learning_rate'])
# Setting minimum validation loss to inf
validation_loss_minimum = np.Inf
train_loss_history = []
validation_loss_history = []
for epoch in range(1, n_epochs +1):
training_loss = 0.0
validation_loss = 0.0
# Training loop
training_accuracies = []
for X, y in train_loader:
# Moving data to gpu if using
if torch.cuda.is_available():
X, y = X.cuda(), y.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(X)
# calculate the batch loss
loss = criterion(output, y)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
training_loss += loss.item()*X.size(0)
# calculating accuracy
ps = torch.exp(output)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == y.view(*top_class.shape)
training_accuracies.append(torch.mean(equals.type(torch.FloatTensor)).item())
# Validation Loop
with torch.no_grad():
accuracies = []
for X, y in validation_loader:
# Moving data to gpu if using
if torch.cuda.is_available():
X, y = X.cuda(), y.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(X)
# calculate the batch loss
loss = criterion(output, y)
# update validation loss
validation_loss += loss.item()*X.size(0)
# calculating accuracy
ps = torch.exp(output)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == y.view(*top_class.shape)
accuracies.append(torch.mean(equals.type(torch.FloatTensor)).item())
# Mean loss
mean_training_loss = training_loss/len(train_loader.sampler)
mean_validation_loss = validation_loss/len(validation_loader.sampler)
mean_train_accuracy = sum(training_accuracies)/len(training_accuracies)
mean_accuracy = sum(accuracies)/len(accuracies)
train_loss_history.append(mean_training_loss)
validation_loss_history.append(mean_validation_loss)
# Printing epoch stats
print(f'Epoch: {epoch}/{n_epochs}, ' +\
f'Training Loss: {mean_training_loss:.3f}, '+\
f'Train accuracy {mean_train_accuracy:.3f} ' +\
f'Validation Loss: {mean_validation_loss:.3f}, '+\
f'Validation accuracy {mean_accuracy:.3f}')
# logging with mlflow
if mlflow.active_run():
mlflow.log_metric('loss', mean_training_loss, step=epoch)
mlflow.log_metric('accuracy', mean_train_accuracy, step=epoch)
mlflow.log_metric('validation_accuracy', mean_accuracy, step=epoch)
mlflow.log_metric('validation_loss', mean_validation_loss, step=epoch)
        # Testing for early stopping
if stopping_treshold:
if mean_validation_loss < validation_loss_minimum:
validation_loss_minimum = mean_validation_loss
print('New minimum validation loss (saving model)')
save_pth = os.path.join('models',f'{config["name"]}.pt')
torch.save(model.state_dict(), save_pth)
            elif len([v for v in validation_loss_history[-stopping_treshold:] if v > validation_loss_minimum]) >= stopping_treshold:
                print(f"Stopping early at epoch: {epoch}/{n_epochs}")
                break
    # Return the loss histories so callers can unpack them, e.g. tlh, vlh = train(...)
    return train_loss_history, validation_loss_history
```
### Training Pytorch models
```
# Getting configs
par = pt_runs[0].data.params
config = {'data_split': par['data_split'],
'decay': np.float64(par['decay']),
'dropout': np.float64(par['dropout']),
'framework': par['framework'],
'learning_rate': np.float64(par['learning_rate']),
'max_epochs': int(par['max_epochs']),
'resolution': int(par['resolution']),
'type': par['type'],
'name': 'top_pytorch'}
# Set up data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
validation_loader = torch.utils.data.DataLoader(test_data, batch_size=64, shuffle=True)
# Initializing the model
mdl = PlantDiseaseNet(input_size=config['resolution']**2, dropout_p=config['dropout'])
print("Starting training on network: \n", mdl)
mlflow.set_experiment("Plant Leaf Disease")
with mlflow.start_run():
mlflow.log_params(config)
tlh, vlh = train(mdl, train_loader, validation_loader, config, n_epochs=config['max_epochs'], stopping_treshold=50)
```
# Lightweight python components
Lightweight python components do not require you to build a new container image for every code change. They're intended for fast iteration in a notebook environment.
**Building a lightweight python component**
To build a component, just define a stand-alone Python function and then call `kfp.components.func_to_container_op(func)` to convert it to a component that can be used in a pipeline.
There are several requirements for the function:
- The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.
- The function can only import packages that are available in the base image. If you need to import a package that's not available you can try to find a container image that already includes the required packages. (As a workaround you can use the module subprocess to run pip install for the required package. There is an example below in my_divmod function.)
- If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as string.
- To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])
```
# Install the dependency packages
!pip install --upgrade pip
!pip install numpy tensorflow kfp-tekton
```
**Important**: If you are running this notebook using the Kubeflow Jupyter Server, you need to restart the Python **Kernel** because the packages above overwrote some default packages inside the Kubeflow Jupyter image.
```
import kfp
import kfp.components as comp
```
Simple function that just adds two numbers:
```
#Define a Python function
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
```
Convert the function to a pipeline operation
```
add_op = comp.func_to_container_op(add)
```
A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.
```
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
'''Divides two numbers and calculate the quotient and remainder'''
#Pip installs inside a component function.
#NOTE: installs should be placed right at the beginning to avoid upgrading a package
# after it has already been imported and cached by python
import sys, subprocess;
subprocess.run([sys.executable, '-m', 'pip', 'install', 'tensorflow==1.8.0'])
#Imports inside a component function:
import numpy as np
#This function demonstrates how to use nested functions inside a component function:
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
from tensorflow.python.lib.io import file_io
import json
# Exports a sample tensorboard:
metadata = {
'outputs' : [{
'type': 'tensorboard',
'source': 'gs://ml-pipeline-dataset/tensorboard-train',
}]
}
# Exports two sample metrics:
metrics = {
'metrics': [{
'name': 'quotient',
'numberValue': float(quotient),
},{
'name': 'remainder',
'numberValue': float(remainder),
}]}
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
```
Test running the python function directly
```
my_divmod(100, 7)
```
#### Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
```
divmod_op = comp.func_to_container_op(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
```
#### Define the pipeline
Pipeline function has to be decorated with the `@dsl.pipeline` decorator
```
import kfp.dsl as dsl
@dsl.pipeline(
name='Calculation pipeline',
description='A toy pipeline that performs arithmetic calculations.'
)
# Currently kfp-tekton doesn't support passing parameters to the pipelinerun yet, so we hard-code the numbers here
def calc_pipeline(
a='7',
b='8',
c='17',
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#Passing a task output reference as operation arguments
#For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
divmod_task = divmod_op(add_task.output, b)
    #For an operation with multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
result_task = add_op(divmod_task.outputs['quotient'], c)
```
Compile and run the pipeline as Tekton YAML using the kfp-tekton SDK
```
# Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
# Specify Kubeflow Pipeline Host
host=None
# Submit a pipeline run using the KFP Tekton client.
from kfp_tekton import TektonClient
TektonClient(host=host).create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# For Argo users, submit the pipeline run using the below client.
# kfp.Client(host=host).create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
```
# Predicting Student Admissions with Neural Networks
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
## Loading the data
To load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:
- https://pandas.pydata.org/pandas-docs/stable/
- https://docs.scipy.org/
```
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
```
## Plotting the data
First, let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
```
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
```
Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
```
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
```
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
## TODO: One-hot encoding the rank
Use the `get_dummies` function in pandas in order to one-hot encode the data.
```
# TODO: Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data["rank"],prefix="rank")],axis=1)
# TODO: Drop the previous rank column
one_hot_data = one_hot_data.drop("rank", axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
```
## TODO: Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
```
# Making a copy of our data
processed_data = one_hot_data[:]
# TODO: Scale the columns
processed_data["gre"] = processed_data["gre"] / 800
processed_data["gpa"] = processed_data["gpa"] / 4.0
# Printing the first 10 rows of our processed data
processed_data[:10]
```
## Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
```
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
```
## Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).
```
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
```
## Training the 2-layer Neural Network
The following function trains the 2-layer neural network. First, we'll write some helper functions.
```
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1-sigmoid(x))
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
```
## TODO: Backpropagate the error
Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ (y-\hat{y}) \sigma'(x) $$
```
# TODO: Write the error term formula
def error_term_formula(y, output):
    return (y - output) * output * (1 - output)
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
# Training function
def train_nn(features, targets, epochs, learnrate):
    # Use the same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Activation of the output unit
# Notice we multiply the inputs and the weights here
# rather than storing h as a separate variable
output = sigmoid(np.dot(x, weights))
# The error, the target minus the network output
error = error_formula(y, output)
# The error term
            # Notice we calculate f'(h) here instead of defining a separate
# sigmoid_prime function. This just makes it faster because we
# can re-use the result of the sigmoid function stored in
# the output variable
error_term = error_term_formula(y, output)
# The gradient descent step, the error times the gradient times the inputs
del_w += error_term * x
# Update the weights here. The learning rate times the
# change in weights, divided by the number of records to average
weights += learnrate * del_w / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
print("Epoch:", e)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
print("=========")
print("Finished training!")
return weights
weights = train_nn(features, targets, epochs, learnrate)
```
## Calculating the Accuracy on the Test Data
```
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```
|
github_jupyter
|
```
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing import image
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
plt.style.use("seaborn")
import numpy as np
from numpy import expand_dims
import pandas as pd
import random
from pathlib import Path
from IPython.display import display
from PIL import Image
import pickle
import glob
import os
import cv2
from google.colab import drive
drive.mount('/drive')
os.listdir('/drive/My Drive/FaceMask/data')
def loadData(path,dataFrame):
data = []
for i in range(len(dataFrame)):
data.append(path+dataFrame['filename'][i])
return data
def loadImages(listPath, img_size):
images = []
for img in listPath:
z= image.load_img(img,target_size=img_size)
r = image.img_to_array(z)
r = preprocess_input(r)
images.append(r)
return np.array(images)
def loadLabels(dataFrame):
labels = []
for row in range(len(dataFrame)):
if dataFrame["class"][row] == 'with_mask':
y= [1.0, 0.0]
else:
y=[0.0, 1.0]
labels.append(y)
return np.array(labels,dtype="float32")
```
##### Load the training data paths
```
path = "/drive/My Drive/FaceMask/data/train/"
train_csv_df = pd.DataFrame(pd.read_csv("/drive/My Drive/FaceMask/data/train.csv"))
train_csv_df.head()
imgPath = loadData(path,train_csv_df)
```
##### Load the test data paths
```
testPath = "/drive/My Drive/FaceMask/data/test/"
test_csv_df = pd.DataFrame(pd.read_csv("/drive/My Drive/FaceMask/data/test.csv"))
test_csv_df.head()
imgTest = loadData(testPath,test_csv_df)
```
### Load the training and test images
```
train_images_array = loadImages(imgPath, (224,224))   # must match the 224x224 MobileNetV2 input below
test_images_array = loadImages(imgTest, (224,224))
```
### Get the labels
```
train_labels_array = loadLabels(train_csv_df)
test_labels_array = loadLabels(test_csv_df)
```
### Augment the training data
```
aug = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
# Loading the MobileNetV2 network, with the topmost layer removed
base_model = MobileNetV2(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
# Freeze the layers of the base model to make them untrainable.
# This ensures that their weights are not updated when we train the model.
for layer in base_model.layers:
layer.trainable = False
# Construct head of the model that will be attached on top of the base model:
head_model = base_model.output
head_model = AveragePooling2D(pool_size=(7, 7))(head_model)
head_model = Flatten(name="flatten")(head_model)
head_model = Dense(128, activation="relu")(head_model)
head_model = Dropout(0.5)(head_model)
head_model = Dense(2, activation="softmax")(head_model)
# Combine the head and base of the models together:
my_model = Model(inputs=base_model.input, outputs=head_model)
my_model.summary()
INIT_LR = 1e-4
EPOCHS = 20
BATCH_SIZE = 32
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
my_model.compile(loss="binary_crossentropy", optimizer=opt,
metrics=["accuracy"])
history = my_model.fit(
aug.flow(train_images_array, train_labels_array, batch_size=BATCH_SIZE),
steps_per_epoch=len(train_images_array) // BATCH_SIZE,
validation_data=(test_images_array, test_labels_array),
validation_steps=len(test_images_array)//BATCH_SIZE,
epochs=EPOCHS)
my_model.save("/drive/My Drive/FaceMask/model.h5")
```
## EVALUATE MODEL
```
results = my_model.evaluate(test_images_array, test_labels_array, batch_size=128)
```
## PLOT RESULT
```
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
train_loss = history.history['loss']
val_loss = history.history['val_loss']
plt.rcParams['figure.figsize'] = [10, 5]
# Plot training & validation accuracy values
plt.plot(train_acc)
plt.plot(val_acc)
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'])
plt.savefig('/drive/My Drive/FaceMask/accuracy.png')
# Plot training & validation loss values on a separate figure
plt.figure()
plt.plot(train_loss)
plt.plot(val_loss)
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'])
plt.savefig('/drive/My Drive/FaceMask/lost.png')
```
## PREDICT
```
my_model = load_model(r'D:\PROGRAMING\Python\Face Mask/model.h5')
img = image.load_img(r'D:\PROGRAMING\Python\Face Mask/1.jpg', target_size=(224,224))
plt.imshow(img)
plt.axis('off')
plt.show()
img = image.img_to_array(img)
img = preprocess_input(img)   # apply the same preprocessing used for the training images
img = img.reshape(1, 224, 224, 3)
prediksi = my_model.predict(img)
idx = np.argmax(prediksi)
percentage = "%.2f" % (prediksi[0][idx] * 100)
print(str(percentage)+" %")
if idx == 0:
    print("Wearing mask")      # index 0 corresponds to the 'with_mask' label
else:
    print("Not wearing mask")
```
|
github_jupyter
|
```
import matplotlib.pyplot as plt
from matplotlib import cm, colors, rcParams
import numpy as np
import bayesmark.constants as cc
from bayesmark.path_util import abspath
from bayesmark.serialize import XRSerializer
from bayesmark.constants import ITER, METHOD, TEST_CASE, OBJECTIVE, VISIBLE_TO_OPT
# User settings, must specify location of the data to make plots here for this to run
DB_ROOT = abspath(".")
DBID = "bo_example_folder"
metric_for_scoring = VISIBLE_TO_OPT
# Matplotlib setup
# Note: this setup can embed Type 3 fonts in saved PDFs, if that matters for your use case
rcParams["mathtext.fontset"] = "stix"
rcParams["font.family"] = "STIXGeneral"
def build_color_dict(names):
"""Make a color dictionary to give each name a mpl color.
"""
norm = colors.Normalize(vmin=0, vmax=1)
m = cm.ScalarMappable(norm, cm.tab20)
color_dict = m.to_rgba(np.linspace(0, 1, len(names)))
color_dict = dict(zip(names, color_dict))
return color_dict
# Load the data
agg_results_ds, meta = XRSerializer.load_derived(DB_ROOT, db=DBID, key=cc.PERF_RESULTS)
# Setup for plotting
method_list = agg_results_ds.coords[METHOD].values
method_to_rgba = build_color_dict(method_list.tolist())
# Make the plots for individual test functions
for func_name in agg_results_ds.coords[TEST_CASE].values:
plt.figure(figsize=(5, 5), dpi=300)
for method_name in method_list:
curr_ds = agg_results_ds.sel({TEST_CASE: func_name, METHOD: method_name, OBJECTIVE: metric_for_scoring})
plt.fill_between(
curr_ds.coords[ITER].values,
curr_ds[cc.LB_MED].values,
curr_ds[cc.UB_MED].values,
color=method_to_rgba[method_name],
alpha=0.5,
)
plt.plot(
curr_ds.coords[ITER].values,
curr_ds[cc.PERF_MED].values,
color=method_to_rgba[method_name],
label=method_name,
marker=".",
)
plt.xlabel("evaluation", fontsize=10)
plt.ylabel("median score", fontsize=10)
plt.title(func_name)
plt.legend(fontsize=8, bbox_to_anchor=(1.05, 1), loc="upper left", borderaxespad=0.0)
plt.grid()
plt.figure(figsize=(5, 5), dpi=300)
for method_name in method_list:
curr_ds = agg_results_ds.sel({TEST_CASE: func_name, METHOD: method_name, OBJECTIVE: metric_for_scoring})
plt.fill_between(
curr_ds.coords[ITER].values,
curr_ds[cc.LB_MEAN].values,
curr_ds[cc.UB_MEAN].values,
color=method_to_rgba[method_name],
alpha=0.5,
)
plt.plot(
curr_ds.coords[ITER].values,
curr_ds[cc.PERF_MEAN].values,
color=method_to_rgba[method_name],
label=method_name,
marker=".",
)
plt.xlabel("evaluation", fontsize=10)
plt.ylabel("mean score", fontsize=10)
plt.title(func_name)
plt.legend(fontsize=8, bbox_to_anchor=(1.05, 1), loc="upper left", borderaxespad=0.0)
plt.grid()
```
|
github_jupyter
|
# Elementary Data Types
*Reminder:* When declaring a variable, you must either state its data type or the type must be clearly inferable.
The following two statements each create a variable of type `int`:
    var a int
    b := 42
So far we have only used one data type: `int`. This type represents whole numbers, i.e. positive or negative numbers without decimal places. Go provides a whole range of data types for different kinds of data and for different value ranges.
## Integer Data Types
```
var i1 int      // signed integer
var i2 int32    // signed 32-bit integer
var i3 int64    // signed 64-bit integer
var i4 uint     // unsigned integer
var i5 uint32   // unsigned 32-bit integer
var i6 uint64   // unsigned 64-bit integer
var i7 byte     // special case: 8 bits, unsigned, mostly used for characters
i8 := 42        // automatic type inference; this becomes an `int`
```
### Maximum Value Ranges of `int` Data Types
Most data types have a fixed, limited value range, i.e. there is a largest and a smallest number that can be stored in them.
This limitation comes from the fact that a fixed number of digits is used. If these digits are no longer sufficient, information is lost.
```
^uint(0)                // largest value for a uint
int(^uint(0) >> 1)      // largest value for an int
-int(^uint(0) >> 1)-1   // smallest value for an int
^uint32(0) >> 1         // largest value for an int32
^uint64(0) >> 1         // largest value for an int64
```
### Overflows
If you exceed the maximum value of a data type, an *overflow* occurs:
```
^uint(0)+1
int32(^uint32(0) >> 1)+1
```
## Floating-Point Data Types
In addition to the integer data types there are two *floating-point* data types, intended for representing decimal numbers with a variable number of decimal places:
```
var f1 float32 = 42
var f2 float64 = 23.5
```
Floating-point numbers are needed, for example, to perform divisions, square-root calculations, etc.
Internally they are represented in the form $m \cdot b^e$, e.g. $234.567 = 2.34567 \cdot 10^2$.
One problem with this is that only a limited number of bits is available for representing the *mantissa* ($m$) and the *exponent* ($e$).
As a result, the precision when representing and computing with floating-point numbers is limited. The following examples demonstrate this:
```
a, b := 5.67, 8.97
a - b
var x float64 = 1.01
var i float64 = 0.01
for x < 1.4 {
println(x)
x += i
}
```
## Boolean Values
Another important data type is the Boolean values `true` and `false`.
As the name suggests, they are used to express whether something evaluates to *true* or *false*.
For example, the comparison `42 == 6 * 7` is true, while the statement `42 < 15` is false.
```
var b1 bool
b1
```
Boolean values are used, for example, in conditional branches and loops to evaluate the conditions.
You often write functions that check more complex conditions and return a value of type `bool`.
As a small example, the following function checks whether its parameter is an odd number:
```
func is_odd(n int) bool {
return n % 2 != 0
}
is_odd(3)
```
|
github_jupyter
|
# Working with Jupyter
A tool that lets you create code, visualize results, and write documentation in the same document (.ipynb)
**Command mode:** press `esc` to activate; the cursor becomes inactive
**Edit mode:** press `enter` to activate; insert mode
### Keyboard shortcuts (VERY useful)
To use the shortcuts described below, the cell must be selected but must not be in edit mode.
* To enter command mode: `esc`
* Create a new cell below: `b` (below)
* Create a new cell above: `a` (above)
* Cut a cell: `x`
* Copy a cell: `c`
* Paste a cell: `v`
* Run a cell and stay on it: `ctrl + enter`
* Run a cell and move to the next one: `shift + enter`
* **To see all shortcuts, press `h`**
### Cell types
**Code:** for Python code
**Markdown:** for documentation
There are also **Raw NBConvert** and **Heading**
# Pandas (http://pandas.pydata.org/)
* Python library for data analysis
* Provides high-performance, easy-to-use tools for data analysis
### How to install
* Anaconda (http://pandas.pydata.org/pandas-docs/stable/install.html#installing-pandas-with-anaconda)
    * Download Anaconda: https://www.continuum.io/downloads
    * Install Anaconda: https://docs.continuum.io/anaconda/install
    * Available for `osx-64`, `linux-64`, `linux-32`, `win-64`, `win-32` and `Python 2.7`, `Python 3.4`, and `Python 3.5`
    * `conda install pandas`
* Pip
    * `pip install pandas`
# Matplotlib (http://matplotlib.org/)
* Python library for plotting 2D graphs
### How to install
* Anaconda (http://pandas.pydata.org/pandas-docs/stable/install.html#installing-pandas-with-anaconda)
    * Download Anaconda: https://www.continuum.io/downloads
    * Install Anaconda: https://docs.continuum.io/anaconda/install
    * Available for `osx-64`, `linux-64`, `linux-32`, `win-64`, `win-32` and `Python 2.7`, `Python 3.4`, and `Python 3.5`
    * `conda install matplotlib`
* Pip
    * `pip install matplotlib`
```
import pandas as pd
import matplotlib
%matplotlib inline
```
### Loading a CSV file into a Pandas DataFrame
* `pd.DataFrame.from_csv(file_name)` (deprecated in recent pandas releases; prefer `pd.read_csv(file_name)`)
If you hit a UnicodeDecodeError when using this command, add the parameter `encoding='utf-8'`
## cast.csv
## release_dates.csv
## titles
**`df.head(n)`:**
* View the first *n* rows.
* Default: *n = 5*.
**`df.tail(n)`:**
* View the last *n* rows.
* Default: *n = 5* (see the example sketch below).
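As a quick illustration, a minimal sketch using the cast data (the file name and its columns, such as `title`, `year`, `name`, `type`, `character`, and `n`, are assumptions about the workshop files):
```
# Sketch: load the cast data and peek at it (add encoding='utf-8' if you hit a UnicodeDecodeError)
cast = pd.read_csv('cast.csv')
cast.head()     # first 5 rows
cast.tail(3)    # last 3 rows
```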
### How many records are in the dataset?
**`len(df)`:**
* Size of the df
### What are the possible values of the `type` column?
**`df[col]`:**
* View a single column of the df
or
**`df.col`:**
* Only if the column name has no spaces or special characters and does not clash with an existing attribute or variable
**Note:** when you select a column and work with it outside a DataFrame, it is treated as a **Series**.
**`df[col].unique()`:**
* Show the possible values of a column
### How many actors and actresses are in the dataset?
**`df[col].value_counts()`:**
* Count how many records there are for each possible value of the column col (only if col is categorical); see the sketch below
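A short sketch of both calls, again assuming the `cast` DataFrame and its `type` column from above:
```
# Sketch: distinct values and per-value counts for the 'type' column
cast['type'].unique()          # the distinct values, e.g. 'actor' and 'actress'
cast['type'].value_counts()    # how many rows there are for each value
```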
### Operations on columns
#### Arithmetic operations
#### Comparisons
#### Filtering (see the sketch after this list)
* By a specific value of a column
* By columns
* By null or non-null values
* By a boolean vector
* Fill null values
    * By DataFrame
    * By column
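A few hedged examples of these filtering operations, still assuming the `cast` DataFrame and column names used above:
```
# Sketch: common filtering patterns
cast[cast['year'] == 1960]                                     # by a specific value of a column
cast[(cast['type'] == 'actress') & (cast['year'] >= 1990)]     # by a boolean vector built from comparisons
cast[cast['n'].isnull()]                                       # rows where 'n' is null
cast[cast['n'].notnull()]                                      # rows where 'n' is not null
cast['n'] = cast['n'].fillna(0)                                # fill nulls in one column (df.fillna(0) fills the whole DataFrame)
```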
### How many actors performed in each year?
### What was the difference between the number of actors and actresses who performed in each decade?
### Dates
### What percentage of the movies were released on a Friday?
### Merge
### What are the name and year of the oldest movie?
### How many movies are from 1960?
### How many movies are from each year of the 1970s?
### How many movies have been released from the year you were born until today?
### What are the names of the movies from 1906?
### What are the 15 most common movie names?
### In how many movies did Judi Dench act?
### List the movies in which Judi Dench acted as actor number 1, ordered by year.
### List the actors in the 1972 version of Sleuth, ordered by rank n.
### Which actors acted in the most movies in 1985?
# SciKit Learn (http://scikit-learn.org)
* Python library for data mining and data analysis
### How to install
* Anaconda (http://pandas.pydata.org/pandas-docs/stable/install.html#installing-pandas-with-anaconda)
    * Download Anaconda: https://www.continuum.io/downloads
    * Install Anaconda: https://docs.continuum.io/anaconda/install
    * Available for `osx-64`, `linux-64`, `linux-32`, `win-64`, `win-32` and `Python 2.7`, `Python 3.4`, and `Python 3.5`
    * `conda install scikit-learn`
* Pip
    * `pip install -U scikit-learn`
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
import pickle
import time
time1=time.strftime('%Y-%m-%d_%H-%M-%S')
```
### iris.csv
### Train a Decision Tree model
### Save the model
```
```
### Load the model
### Prediction for the test cases
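Since the code cells for this part are empty in this copy of the notebook, here is a minimal sketch of the whole workflow. The column layout of `iris.csv` (four feature columns plus a `species` label) is an assumption:
```
# Sketch: train, save, load, and predict with a decision tree (column names are assumed)
import pandas as pd
import pickle
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

iris = pd.read_csv('iris.csv')
X = iris.drop('species', axis=1)
y = iris['species']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)

# Save the trained model to disk
with open('decision_tree.pkl', 'wb') as f:
    pickle.dump(clf, f)

# Load it back and predict on the test cases
with open('decision_tree.pkl', 'rb') as f:
    loaded_clf = pickle.load(f)
predictions = loaded_clf.predict(X_test)
print(confusion_matrix(y_test, predictions))
```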
|
github_jupyter
|
<a href="https://colab.research.google.com/github/JoanesMiranda/Machine-learning/blob/master/Autoenconder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Importing the required libraries
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.datasets import mnist
```
### Loading the dataset
```
(x_train, y_train),(x_test, y_test) = mnist.load_data()
```
### Plotting a sample of the images
```
plt.imshow(x_train[10], cmap="gray")
```
### Normalizing the training and test data
```
x_train = x_train / 255.0
x_test = x_test / 255.0
print(x_train.shape)
print(x_test.shape)
```
### Adding noise to the training set
```
noise = 0.3
noise_x_train = []
for img in x_train:
noisy_image = img + noise * np.random.randn(*img.shape)
noisy_image = np.clip(noisy_image, 0., 1.)
noise_x_train.append(noisy_image)
noise_x_train = np.array(noise_x_train)
print(noise_x_train.shape)
```
### Plotting a sample image with the noise applied
```
plt.imshow(noise_x_train[10], cmap="gray")
```
### Adding noise to the test set
```
noise = 0.3
noise_x_test = []
for img in x_test:
noisy_image = img + noise * np.random.randn(*img.shape)
noisy_image = np.clip(noisy_image, 0., 1.)
noise_x_test.append(noisy_image)
noise_x_test = np.array(noise_x_test)
print(noise_x_test.shape)
```
### Plotting a sample image with the noise applied
```
plt.imshow(noise_x_test[10], cmap="gray")
noise_x_train = np.reshape(noise_x_train,(-1, 28, 28, 1))
noise_x_test = np.reshape(noise_x_test,(-1, 28, 28, 1))
print(noise_x_train.shape)
print(noise_x_test.shape)
```
### Autoencoder
```
x_input = tf.keras.layers.Input((28,28,1))
# encoder
x = tf.keras.layers.Conv2D(filters=16, kernel_size=3, strides=2, padding='same')(x_input)
x = tf.keras.layers.Conv2D(filters=8, kernel_size=3, strides=2, padding='same')(x)
# decoder
x = tf.keras.layers.Conv2DTranspose(filters=16, kernel_size=3, strides=2, padding='same')(x)
x = tf.keras.layers.Conv2DTranspose(filters=1, kernel_size=3, strides=2, activation='sigmoid', padding='same')(x)
model = tf.keras.models.Model(inputs=x_input, outputs=x)
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(lr=0.001))
model.summary()
```
### Training the model
```
model.fit(noise_x_train, x_train, batch_size=100, validation_split=0.1, epochs=10)
```
### Predicting the images using the noisy test data
```
predicted = model.predict(noise_x_test)
predicted
```
### Plotting the noisy images and the autoencoder reconstructions
```
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
for images, row in zip([noise_x_test[:10], predicted], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_3_keras_l1_l2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 5: Regularization and Dropout**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 5 Material
* Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)
* Part 5.2: Using K-Fold Cross Validation with Keras [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)
* **Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting** [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)
* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)
* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting
L1 and L2 regularization are two common regularization techniques that can reduce the effects of overfitting (Ng, 2004). Both of these algorithms can either work with an objective function or as a part of the backpropagation algorithm. In both cases the regularization algorithm is attached to the training algorithm by adding an additional objective.
Both of these algorithms work by adding a weight penalty to the neural network training. This penalty encourages the neural network to keep the weights to small values. Both L1 and L2 calculate this penalty differently. For gradient-descent-based algorithms, such as backpropagation, you can add this penalty calculation to the calculated gradients. For objective-function-based training, such as simulated annealing, the penalty is negatively combined with the objective score.
Both L1 and L2 work differently in the way that they penalize the size of a weight. L2 will force the weights into a pattern similar to a Gaussian distribution; L1 will force the weights into a pattern similar to a Laplace distribution, as demonstrated in the following:

As you can see, the L1 algorithm is more tolerant of weights further from 0, whereas the L2 algorithm is less tolerant. We will highlight other important differences between L1 and L2 in the following sections. You also need to note that both L1 and L2 count their penalties based only on weights; they do not count penalties on bias values.
TensorFlow allows [L1/L2 to be added directly to your network](http://tensorlayer.readthedocs.io/en/stable/modules/cost.html).
```
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
########################################
# Keras with L1/L2 for Classification
########################################
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras import regularizers
# Cross-validate
kf = KFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
#kernel_regularizer=regularizers.l2(0.01),
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1],
activation='relu',
activity_regularizer=regularizers.l1(1e-4))) # Hidden 1
model.add(Dense(25, activation='relu',
activity_regularizer=regularizers.l1(1e-4))) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
pred = np.argmax(pred,axis=1) # raw probabilities to chosen class (highest probability)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
```
|
github_jupyter
|
# Saving a web page to scrape later
For many scraping jobs, it makes sense to first save a copy of the web page (or pages) that you want to scrape and then operate on the local files you've saved. This is a good practice for a couple of reasons: You won't be bombarding your target server with requests every time you fiddle with your script and rerun it, and you've got a copy saved in case the page (or pages) disappear.
Here's one way to accomplish that. (If you haven't run through [the notebook on using `requests` to fetch web pages](02.%20Fetching%20HTML%20with%20requests.ipynb), do that first.)
We'll need the `requests` and `bs4` libraries, so let's start by importing them:
```
import requests
import bs4
```
## Fetch the page and write to file
Let's grab the Texas death row page: `'https://www.tdcj.texas.gov/death_row/dr_offenders_on_dr.html'`
```
dr_page = requests.get('https://www.tdcj.texas.gov/death_row/dr_offenders_on_dr.html')
# take a peek at the HTML
dr_page.text
```
Now, instead of continuing on with our scraping journey, we'll use some built-in Python tools to write this to file:
```
# define a name for the file we're saving to
HTML_FILE_NAME = 'death-row-page.html'
# open that file in write mode and write the page's HTML into it
with open(HTML_FILE_NAME, 'w') as outfile:
outfile.write(dr_page.text)
```
The `with` block is just a handy way to deal with opening and closing files -- note that everything under the `with` line is indented.
The `open()` function is used to open files for reading or writing. The first _argument_ that you hand this function is the name of the file you're going to be working on -- we defined it above and attached it to the `HTML_FILE_NAME` variable, which is totally arbitrary. (We could have called it `HTML_BANANAGRAM` if we wanted to.)
The `'w'` means that we're opening the file in "write" mode. We're also tagging the opened file with a variable name using the `as` operator -- `outfile` is an arbitrary variable name that I came up with.
But then we'll use that variable name to do things to the file we've opened. In this case, we want to use the file object's `write()` method to write some content to the file.
What content? The HTML of the page we grabbed, which is accessible through the `.text` attribute.
In human words, this block of code is saying: "Open a file called `death-row-page.html` and write the HTML of that death row page you grabbed earlier into it."
## Reading the HTML from a saved web page
At some point after you've saved your page to file, eventually you'll want to scrape it. To read the HTML into a variable, we'll use a `with` block again, but this time we'll specify "read mode" (`'r'`) and use the `read()` method instead of the `write()` method:
```
with open(HTML_FILE_NAME, 'r') as infile:
html = infile.read()
html
```
Now it's just a matter of turning that HTML into soup -- [see this notebook for more details](03.%20Parsing%20HTML%20with%20BeautifulSoup.ipynb) -- and parsing the results.
```
soup = bs4.BeautifulSoup(html, 'html.parser')
```
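From here the parsing depends on the page's markup. As a rough sketch only (the assumption that the data sits in an HTML `<table>`, and the cell layout, are mine, not something the page guarantees):
```
# Sketch: print the text of each row of the first table on the page, if there is one
table = soup.find('table')
if table:
    for row in table.find_all('tr'):
        cells = [cell.get_text(strip=True) for cell in row.find_all(['td', 'th'])]
        print(cells)
```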
|
github_jupyter
|
# Task 1: Getting started with Numpy
Let's spend a few minutes just learning some of the fundamentals of Numpy. (pronounced as num-pie **not num-pee**)
### what is numpy
Numpy is a Python library that supports large, multi-dimensional arrays and matrices.
Let's look at an example. Suppose we start with a little table:
| a | b | c | d | e |
| :---: | :---: | :---: | :---: | :---: |
| 0 | 1 | 2 | 3 | 4 |
|10| 11| 12 | 13 | 14|
|20| 21 | 22 | 23 | 24 |
|30 | 31 | 32 | 33 | 34 |
|40 |41 | 42 | 43 | 44 |
and I simply want to add 10 to each cell:
| a | b | c | d | e |
| :---: | :---: | :---: | :---: | :---: |
| 10 | 11 | 12 | 13 | 14 |
|20| 21| 22 | 23 | 24|
|30| 31 | 32 | 33 | 34 |
|40 | 41 | 42 | 43 | 44 |
|50 |51 | 52 | 53 | 54 |
To make things interesting, instead of a 5x5 array, let's make it 1,000x1,000 -- so 1 million cells!
First, let's construct it in generic Python
```
a = [[x + y * 1000 for x in range(1000)] for y in range(1000)]
```
Instead of glossing over the first code example in the course, take your time, go back, and parse it out so you understand it. Test it out and see what it looks like. For example, how would you change the example to make a 10x10 array called `a2`? execute the code here:
```
# TO DO
```
Now let's take a look at the value of a2:
```
a2
```
Now that we understand that line of code let's go on and write a function that will add 10 to each cell in our original 1000x1000 matrix.
```
def addToArr(sizeof):
for i in range(sizeof):
for j in range(sizeof):
a[i][j] = a[i][j] + 10
```
As you can see, we iterate over the array with nested for loops.
Let's take a look at how much time it takes to run that function:
```
%time addToArr(1000)
```
My results were:
CPU times: user 145 ms, sys: 0 ns, total: 145 ms
Wall time: 143 ms
So about 1/7 of a second.
### Doing in using Numpy
Now do the same using Numpy.
We can construct the array using
arr = np.arange(1000000).reshape((1000,1000))
Not sure what that line does? Numpy has great online documentation. [Documentation for np.arange](https://numpy.org/doc/stable/reference/generated/numpy.arange.html) says it "Return evenly spaced values within a given interval." Let's try it out:
```
import numpy as np
np.arange(16)
```
So `np.arange(16)` creates an array of 16 sequential integers. [The documentation for reshape](https://numpy.org/doc/1.18/reference/generated/numpy.reshape.html) says, as the name suggests, "Gives a new shape to an array without changing its data." Suppose we want to reshape our 1-dimensional array of 16 integers to a 4x4 one; we can do:
```
np.arange(16).reshape((4,4))
```
As you can see it is pretty easy to find documentation on Numpy.
Back to our example of creating a 1000x1000 matrix, we now can time how long it takes to add 10 to each cell.
%time arr = arr + 10
Let's put this all together:
```
import numpy as np
arr = np.arange(1000000).reshape((1000,1000))
%time arr = arr + 10
```
My results were
CPU times: user 1.26 ms, sys: 408 µs, total: 1.67 ms
Wall time: 1.68 ms
So, depending on your computer, somewhere around 25 to 100 times faster. **That is phenomenally faster!**
And Numpy is even fast in creating arrays:
#### the generic Python way
```
%time a = [[x + y * 1000 for x in range(1000)] for y in range(1000)]
```
My results were
CPU times: user 92.1 ms, sys: 11.5 ms, total: 104 ms
Wall time: 102 ms
#### the Numpy way
```
%time arr = np.arange(1000000).reshape((1000,1000))
```
What are your results?
<h3 style="color:red">Q1. Speed</h3>
<span style="color:red">Suppose I want to create an array with 10,000 by 10,000 cells. Then I want to add 1 to each cell. How much time does this take using generic Python arrays and using Numpy arrays?</span>
#### in Python
(be patient -- this may take a number of seconds)
#### in Numpy
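If you want something to compare your answer against, here is one way to set the timings up. Note that a 10,000x10,000 grid has 100 million cells, so the plain-Python version needs several gigabytes of RAM and noticeably more time:
```
# Plain Python: build the nested list, then add 1 to every cell
%time big = [[x + y * 10000 for x in range(10000)] for y in range(10000)]
%time big = [[cell + 1 for cell in row] for row in big]
# Numpy: build the array, then add 1 to every cell
%time big_arr = np.arange(100000000).reshape((10000, 10000))
%time big_arr = big_arr + 1
```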
### built in functions
In addition to being faster, numpy has a wide range of built in functions. So, for example, instead of you writing code to calculate the mean or sum or standard deviation of a multidimensional array you can just use numpy:
```
arr.mean()
arr.sum()
arr.std()
```
So not only is it faster, but it minimizes the code you have to write. A win, win.
Let's continue with some basics.
## numpy examined
So Numpy is a library containing a super-fast n-dimensional array object and a load of functions that can operate on those arrays. To use numpy, we must first load the library into our code and we do that with the statement:
```
import numpy as np
```
Perhaps most of you are saying "fine, fine, I know this already", but let me catch others up to speed. This is just one of several ways we can load a library into Python. We could just say:
```
import numpy
```
and every time we need to use one of the functions built in
to numpy we would need to preface that function with `numpy`. So for example, we could create an array with
```
arr = numpy.array([1, 2, 3, 4, 5])
```
If we got tired of writing `numpy` in front of every function, instead of typing
```
import numpy
```
we could write:
```
from numpy import *
```
(where that * means 'everything' and the whole expression means import everything from the numpy library). Now we can use any numpy function without putting numpy in front of it:
```
arr = array([1, 2, 3, 4, 5])
```
This may at first seem like a good idea, but it is considered bad form by Python developers.
The solution is to use what we initially introduced:
```
import numpy as np
```
this makes `np` an alias for numpy. so now we would put *np* in front of numpy functions.
```
arr = np.array([1, 2, 3, 4, 5])
```
Of course we could use anything as an alias for numpy:
```
import numpy as myCoolSneakers
arr = myCoolSneakers.array([1, 2, 3, 4, 5])
```
But it is convention among data scientists, machine learning experts, and the cool kids to use np. One big benefit of this convention is that it makes the code you write more understandable to others and vice versa (I don't need to be scouring your code to find out what `myCoolSneakers.array` does)
## creating arrays
An Array in Numpy is called an `ndarray` for n-dimensional array. As we will see, they share some similarities with Python lists. We have already seen how to create one:
```
arr = np.array([1, 2, 3, 4, 5])
```
and to display what `arr` equals
```
arr
```
This is a one dimensional array. The position of an element in the array is called the index. The first element of the array is at index 0, the next at index 1 and so on. We can get the item at a particular index by using the syntax:
```
arr[0]
arr[3]
```
We can create a 2 dimensional array that looks like
10 20 30
40 50 60
by:
```
arr = np.array([[10, 20, 30], [40, 50, 60]])
```
and we can show the contents of that array just be using the name of the array, `arr`
```
arr
```
We don't need to name arrays `arr`, we can name them anything we want.
```
ratings = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
ratings
```
So far, we've been creating numpy arrays by using Python lists. We can make that more explicit by first creating the Python list and then using it to create the ndarray:
```
pythonArray = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
sweet = np.array(pythonArray)
sweet
```
We can also create an array of all zeros or all ones directly:
```
np.zeros(10)
np.ones((5, 2))
```
### indexing
Indexing elements in ndarrays works pretty much the same as it does in Python. We have already seen one example, here is another example with a one dimensional array:
```
temperatures = np.array([48, 44, 37, 35, 32, 29, 33, 36, 42])
temperatures[0]
temperatures[3]
```
and a two dimensional one:
```
sample = np.array([[10, 20, 30], [40, 50, 60]])
sample[0][1]
```
For numpy ndarrays we can also use a comma to separate the indices of multi-dimensional arrays:
```
sample[1,2]
```
And, like Python you can also get a slice of an array. First, here is the basic Python example:
```
a = [10, 20, 30, 40, 50, 60]
b = a[1:4]
b
```
and the similar numpy example:
```
aarr = np.array(a)
barr = aarr[1:4]
barr
```
### Something wacky to remember
But there is a difference between Python arrays and numpy ndarrays. If I alter the array `b` in Python the original `a` array is not altered:
```
b[1] = b[1] + 5
b
a
```
but if we do the same in numpy:
```
barr[1] = barr[1] + 5
barr
aarr
```
we see that the original array is altered since we modified the slice. This may seem wacky to you, or maybe it doesn't. In any case, it is something you will get used to. For now, just be aware of this.
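If you want a numpy slice that behaves like the Python version, ask for an independent copy explicitly with `.copy()`:
```
carr = aarr[1:4].copy()    # an independent copy, not a view
carr[1] = carr[1] + 5
carr
aarr                       # the original array is unchanged this time
```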
## functions on arrays
Numpy has a wide range of array functions. Here is just a sample.
### Unary functions
#### absolute value
```
arr = np.array([-2, 12, -25, 0])
arr2 = np.abs(arr)
arr2
arr = np.array([[-2, 12], [-25, 0]])
arr2 = np.abs(arr)
arr2
```
#### square
```
arr = np.array([-1, 2, -3, 4])
arr2 = np.square(arr)
arr2
```
#### square root
```
arr = np.array([[4, 9], [16, 25]])
arr2 = np.sqrt(arr)
arr2
```
## Binary functions
#### add /subtract / multiply / divide
```
arr1 = np.array([[10, 20], [30, 40]])
arr2 = np.array([[1, 2], [3, 4]])
np.add(arr1, arr2)
np.subtract(arr1, arr2)
np.multiply(arr1, arr2)
np.divide(arr1, arr2)
```
#### maximum / minimum
```
arr1 = np.array([[10, 2], [3, 40]])
arr2 = np.array([[1, 20], [30, 4]])
np.maximum(arr1, arr2)
```
#### these are just examples. There are more unary and binary functions
## Numpy Uber
Let's say I have Uber drivers at various intersections around Austin. I will represent that as a set of x,y coordinates.
| Driver |xPos | yPos |
| :---: | :---: | :---: |
| Ann | 4 | 5 |
| Clara | 6 | 6 |
| Dora | 3 | 1 |
| Erica | 9 | 5 |
Now I would like to find the closest driver to a customer who is at 6, 3.
And to further define *closest* I am going to use what is called **Manhattan Distance**. Roughly put, Manhattan distance is distance if you followed streets. Ann, for example, is two blocks West of our customer and two blocks north. So the Manhattan distance from Ann to our customer is `2+2` or `4`.
First, to make things easy (and because the data in a numpy array must be of the same type), I will represent the x and y positions in one numpy array and the driver names in another:
```
locations = np.array([[4, 5], [6, 6], [3, 1], [9,5]])
locations
drivers = np.array(["Ann", "Clara", "Dora", "Erica"])
```
Our customer is at
```
cust = np.array([6, 3])
```
now we are going to figure out the distance between each of our drivers and the customer
```
xydiff = locations - cust
xydiff
```
NOTE: displaying the results with `xydiff` isn't a necessary step. I just like seeing intermediate results.
Ok. now I am goint to sum the absolute values:
```
distances = np.abs(xydiff).sum(axis=1)
distances
```
So the output is the array `[4, 3, 5, 5]` which shows that Ann is 4 away from our customer; Clara is 3 away and so on.
Now I am going to sort these using `argsort`:
```
sorted = np.argsort(distances)
sorted
```
`argsort` returns an array of sorted indices. So the element at position 1 is the smallest followed by the element at position 0 and so on.
Next, I am going to get the first element of that array (in this case 1) and find the name of the driver at that position in the `drivers` array
```
drivers[sorted[0]]
```
<h3 style="color:red">Q2. You Try</h3>
<span style="color:red">Can you put all the above in a function. that takes 3 arguments, the location array, the array containing the names of the drivers, and the array containing the location of the customer. It should return the name of the closest driver.</span>
```
def findDriver(distanceArr, driversArr, customerArr):
result = ''
### put your code here
return result
print(findDriver(locations, drivers, cust)) # this should return Clara
```
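If you get stuck, here is one possible way to assemble the steps from the Numpy Uber walkthrough into the function; treat it as a sketch rather than the only solution:
```
# Sketch solution: Manhattan distance per driver, then the name at the smallest distance
def findDriver(distanceArr, driversArr, customerArr):
    distances = np.abs(distanceArr - customerArr).sum(axis=1)
    return driversArr[np.argsort(distances)[0]]

print(findDriver(locations, drivers, cust))   # Clara
```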
### CONGRATULATIONS
Even though this is just an intro to Numpy, I am going to throw some math at you. So far we have been looking at a two dimensional example, x and y (or North-South and East-West) and our distance formula for the distance, Dist between Ann, A and Customer C is
$$ DIST_{AC} = |A_x - C_x | + |A_y - C_y | $$
Now I am going to warp this a bit. In this example, each driver is represented by an array (as is the customer). So Ann is represented by `[4,5]` and the customer by `[6,3]`, meaning Ann's 0th element is 4 and the customer's 0th element is 6. And, sorry, computer science people start counting at 0 but math people (and all other normal people) start at 1, so we can rewrite the above formula as:
$$ DIST_{AC} = |A_1 - C_1 | + |A_2 - C_2 | $$
That's the distance formula for Ann and the Customer. We can generalize the formula by saying the distance between any two people, let's call them *x* and *y*, is
$$ DIST_{xy} = |x_1 - y_1 | + |x_2 - y_2 | $$
That is the formula for 2 dimensional Manhattan Distance. We can imagine a three dimensional case.
$$ DIST_{xy} = |x_1 - y_1 | + |x_2 - y_2 | + |x_3 - y_3 | $$
and we can generalize the formula to the n-dimensional case.
$$ DIST_{xy}=\sum_{i=1}^n |x_i - y_i| $$
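In numpy, this n-dimensional version is still a one-liner, no matter how many dimensions the arrays have:
```
# Manhattan distance between two equal-length numpy arrays, in any number of dimensions
def manhattan(x, y):
    return np.abs(x - y).sum()

manhattan(np.array([4, 5]), np.array([6, 3]))   # 4, the Ann-to-customer distance from above
```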
Just in time for a five dimensional example:
# The Amazing 5D Music example
Guests went into a listening booth and rated the following tunes:
* [Janelle Monae Tightrope](https://www.youtube.com/watch?v=pwnefUaKCbc)
* [Major Lazer - Cold Water](https://www.youtube.com/watch?v=nBtDsQ4fhXY)
* [Tim McGraw - Humble & Kind](https://www.youtube.com/watch?v=awzNHuGqoMc)
* [Maren Morris - My Church](https://www.youtube.com/watch?v=ouWQ25O-Mcg)
* [Hailee Steinfeld - Starving](https://www.youtube.com/watch?v=xwjwCFZpdns)
Here are the results:
| Guest | Janelle Monae | Major Lazer | Tim McGraw | Maren Morris | Hailee Steinfeld|
|---|---|---|---|---|---|
| Ann | 4 | 5 | 2 | 1 | 3 |
| Ben | 3 | 1 | 5 | 4 | 2|
| Jordyn | 5 | 5 | 2 | 2 | 3|
| Sam | 4 | 1 | 4 | 4 | 1|
| Hyunseo | 1 | 1 | 5 | 4 | 1 |
| Ahmed | 4 | 5 | 3 | 3 | 1 |
So Ann, for example, really liked Major Lazer and Janelle Monae but didn't care much for Maren Morris.
Let's set up a few numpy arrays.
```
customers = np.array([[4, 5, 2, 1, 3],
[3, 1, 5, 4, 2],
[5, 5, 2, 2, 3],
[4, 1, 4, 4, 1],
[1, 1, 5, 4, 1],
[4, 5, 3, 3, 1]])
customerNames = np.array(["Ann", "Ben", 'Jordyn', "Sam", "Hyunseo", "Ahmed"])
```
Now let's set up a few new customers:
```
mikaela = np.array([3, 2, 4, 5, 4])
brandon = np.array([4, 5, 1, 2, 3])
```
Now we would like to determine which of our current customers is closest to Mikaela and which to Brandon.
<h3 style="color:red">Q3. You Try</h3>
<span style="color:red">Can you write a function findClosest that takes 3 arguments: customers, customerNames, and an array representing one customer's ratings and returns the name of the closest customer?</span>
Let's break this down a bit.
1. Which line in the Numpy Uber section above will create a new array which is the result of subtracting the Mikaela array from each row of the customers array resulting in
```
array([[ 1, 3, -2, -4, -1],
[ 0, -1, 1, -1, -2],
[ 2, 3, -2, -3, -1],
[ 1, -1, 0, -1, -3],
[-2, -1, 1, -1, -3],
[ 1, 3, -1, -2, -3]])
```
```
# TODO
```
2. Which line above will take the array you created and generate a single integer distance for each row representing how far away that row is from Mikaela? The results will look like:
```
array([11, 5, 11, 6, 8, 10])
```
```
# TO DO
```
Finally, we want a sorted array of indices, the zeroth element of that array will be the closest row to Mikaela, the next element will be the next closest and so on. The result should be
```
array([1, 3, 4, 5, 0, 2])
```
```
# TO DO
```
Finally we need the name of the person that is the closest.
```
# TO DO
```
Okay, time to put it all together. Can you combine all the code you wrote above to finish the following function? So x is the new person and we want to find the closest customer to x.
```
def findClosest(customers, customerNames, x):
# TO DO
return ''
print(findClosest(customers, customerNames, mikaela)) # Should print Ben
print(findClosest(customers, customerNames, brandon)) # Should print Ann
```
## Numpy Amazon
We are going to start with the same array we did way up above:
| Drone |xPos | yPos |
| :---: | :---: | :---: |
| wing_1a | 4 | 5 |
| wing_2a | 6 | 6 |
| wing_3a | 3 | 1 |
| wing_4a | 9 | 5 |
But this time, instead of Uber drivers, think of these as positions of [Alphabet's Wing delivery drones](https://wing.com/).
Now we would like to find the closest drone to a customer who is at 7, 1.
With the previous example we used Manhattan Distance. With drones, we can compute the distance as the crow flies -- or Euclidean Distance. We probably learned how to do this way back in 7th grade when we learned the Pythagorean Theorem which states:
$$c^2 = a^2 + b^2$$
Where *c* is the hypotenuse and *a* and *b* are the two other sides. So, if we want to find *c*:
$$c = \sqrt{a^2 + b^2}$$
If we want to find the distance between the drone and a customer, *x* and *y* in the formula becomes
$$Dist_{xy} = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2}$$
and for `wing_1a` who is at `[4,5]` and our customer who is at `[7,1]` then the formula becomes:
$$Dist_{xy} = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2} = \sqrt{(4-7)^2 + (5-1)^2} = \sqrt{(-3)^2 + 4^2} = \sqrt{9 + 16} = \sqrt{25} = 5$$
Sweet! And to generalize this distance formula:
$$Dist_{xy} = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2}$$
to n-dimensions:
$$Dist_{xy} = \sqrt{\sum_{i=1}^n{(x_i-y_i)^2}}$$
<h3 style="color:red">Q4. You Try</h3>
<span style="color:red">Can you write a function euclidean that takes 3 arguments: droneLocation, droneNames, and an array representing one customer's position and returns the name of the closest drone?</span>
First, a helpful hint:
```
arr = np.array([-1, 2, -3, 4])
arr2 = np.square(arr)
arr2
locations = np.array([[4, 5], [6, 6], [3, 1], [9,5]])
drivers = np.array(["wing_1a", "wing_2a", "wing_3a", "wing_4a"])
cust = np.array([7, 1])
def euclidean(droneLocation, droneNames, x):
result = ''
### your code here
return result
euclidean(locations, drivers, cust)
```
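One possible solution, shown as a sketch in case you want to check your work:
```
# Sketch solution: Euclidean distance per drone, then the name at the smallest distance
def euclidean(droneLocation, droneNames, x):
    distances = np.sqrt(np.square(droneLocation - x).sum(axis=1))
    return droneNames[np.argsort(distances)[0]]

euclidean(locations, drivers, cust)   # 'wing_3a', the closest drone to the customer at [7, 1]
```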
<h3 style="color:red">Q5. You Try</h3>
<span style="color:red">Try your code on the "Amazing 5D Music" example. Does it return the same person or a different one?</span>
```
#TBD
```
|
github_jupyter
|
# Intake / Pangeo Catalog: Making It Easier To Consume Earth’s Climate and Weather Data
Anderson Banihirwe ([email protected]), Charles Blackmon-Luca ([email protected]), Ryan Abernathey ([email protected]), Joseph Hamman ([email protected])
- NCAR, Boulder, CO, USA
- Columbia University, Palisades, NY, USA
[2020 EarthCube Annual Meeting](https://www.earthcube.org/EC2020) ID: 133
## Introduction
Computer simulations of the Earth’s climate and weather generate huge amounts of data. These data are often persisted on high-performance computing (HPC) systems or in the cloud across multiple data assets in a variety of formats (netCDF, Zarr, etc.).
Finding, investigating, and loading these data assets into compute-ready data containers costs time and effort.
The user should know what data are available and their associated metadata, preferably before loading a specific data asset and analyzing it.
In this notebook, we demonstrate [intake-esm](https://github.com/NCAR/intake-esm), a Python package and an [intake](https://github.com/intake/intake) plugin with aims of facilitating:
- the discovery of earth's climate and weather datasets.
- the ingestion of these datasets into [xarray](https://github.com/pydata/xarray) dataset containers.
The common/popular starting point for finding and investigating large datasets is with a data catalog.
A *data catalog* is a collection of metadata, combined with search tools, that helps data analysts and other users to find the data they need.
For a user to take full advantage of intake-esm, they must point it to an *Earth System Model (ESM) data catalog*.
This is a JSON-formatted file that conforms to the ESM collection specification.
## ESM Collection Specification
The [ESM collection specification](https://github.com/NCAR/esm-collection-spec) provides a machine-readable format for describing a wide range of climate and weather datasets, with a goal of making it easier to index and discover climate and weather data assets.
An asset is any netCDF/HDF file or Zarr store that contains relevant data.
An ESM data catalog serves as an inventory of available data, and provides information to explore the existing data assets.
Additionally, an ESM catalog can contain information on how to aggregate compatible groups of data assets into singular xarray datasets.
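As a rough illustration only (the field names below follow my reading of the ESM collection specification and are simplified, not copied from the CMIP6 catalog), the skeleton of such a JSON file can be sketched in Python like this:
```
# Sketch: the rough shape of a minimal ESM collection description
# (field names are my assumption based on the ESM collection specification)
import json

minimal_catalog = {
    "esmcat_version": "0.1.0",
    "id": "toy-collection",
    "description": "Toy collection whose assets are listed in a CSV file",
    "catalog_file": "toy-collection.csv",            # one row per data asset
    "attributes": [{"column_name": "source_id"},     # searchable columns of the CSV
                   {"column_name": "experiment_id"},
                   {"column_name": "variable_id"}],
    "assets": {"column_name": "zstore", "format": "zarr"},
}
print(json.dumps(minimal_catalog, indent=2))
```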
## Use Case: CMIP6 hosted on Google Cloud
The Coupled Model Intercomparison Project (CMIP) is an international collaborative effort to improve the knowledge about climate change and its impacts on the Earth System and on our society.
[CMIP began in 1995](https://www.wcrp-climate.org/wgcm-cmip), and today we are in its sixth phase (CMIP6).
The CMIP6 data archive consists of data models created across approximately 30 working groups and 1,000 researchers investigating the urgent environmental problem of climate change, and will provide a wealth of information for the next Assessment Report (AR6) of the [Intergovernmental Panel on Climate Change](https://www.ipcc.ch/) (IPCC).
Last year, Pangeo partnered with Google Cloud to bring CMIP6 climate data to Google Cloud’s Public Datasets program.
You can read more about this process [here](https://cloud.google.com/blog/products/data-analytics/new-climate-model-data-now-google-public-datasets).
For the remainder of this section, we will demonstrate intake-esm's features using the ESM data catalog for the CMIP6 data stored on Google Cloud Storage.
This catalog resides [in a dedicated CMIP6 bucket](https://storage.googleapis.com/cmip6/pangeo-cmip6.json).
### Loading an ESM data catalog
To load an ESM data catalog with intake-esm, the user must provide a valid ESM data catalog as input:
```
import warnings
warnings.filterwarnings("ignore")
import intake
col = intake.open_esm_datastore('https://storage.googleapis.com/cmip6/pangeo-cmip6.json')
col
```
The summary above tells us that this catalog contains over 268,000 data assets.
We can get more information on the individual data assets contained in the catalog by calling the underlying dataframe created when it is initialized:
```
col.df.head()
```
The first data asset listed in the catalog contains:
- the ambient aerosol optical thickness at 550nm (`variable_id='od550aer'`), as a function of latitude, longitude, time,
- in an individual climate model experiment with the Taiwan Earth System Model 1.0 model (`source_id='TaiESM1'`),
- forced by the *Historical transient with SSTs prescribed from historical* experiment (`experiment_id='histSST'`),
- developed by the Taiwan Research Center for Environmental Changes (`institution_id='AS-RCEC'`),
- run as part of the Aerosols and Chemistry Model Intercomparison Project (`activity_id='AerChemMIP'`)
And is located in Google Cloud Storage at `gs://cmip6/AerChemMIP/AS-RCEC/TaiESM1/histSST/r1i1p1f1/AERmon/od550aer/gn/`.
Note: the amount of detail provided in the catalog is determined by the data provider who builds it.
### Searching for datasets
After exploring the [CMIP6 controlled vocabulary](https://github.com/WCRP-CMIP/CMIP6_CVs), it’s straightforward to get the data assets you want using intake-esm's `search()` method. In the example below, we are going to search for the following:
- variables: `tas` which stands for near-surface air temperature
- experiments: `['historical', 'ssp245', 'ssp585']`:
- `historical`: all forcing of the recent past.
- `ssp245`: update of [RCP4.5](https://en.wikipedia.org/wiki/Representative_Concentration_Pathway) based on SSP2.
- `ssp585`: emission-driven [RCP8.5](https://en.wikipedia.org/wiki/Representative_Concentration_Pathway) based on SSP5.
- table_id: `Amon` which stands for Monthly atmospheric data.
- grid_label: `gr` which stands for regridded data reported on the data provider's preferred target grid.
For more details on the CMIP6 vocabulary, please check this [website](http://clipc-services.ceda.ac.uk/dreq/index.html).
```
# form query dictionary
query = dict(experiment_id=['historical', 'ssp245', 'ssp585'],
table_id='Amon',
variable_id=['tas'],
member_id = 'r1i1p1f1',
grid_label='gr')
# subset catalog and get some metrics grouped by 'source_id'
col_subset = col.search(require_all_on=['source_id'], **query)
col_subset.df.groupby('source_id')[['experiment_id', 'variable_id', 'table_id']].nunique()
```
### Loading datasets
Once you've identified data assets of interest, you can load them into xarray dataset containers using the `to_dataset_dict()` method. Invoking this method yields a Python dictionary of high-level aggregated xarray datasets.
The logic for merging/concatenating the query results into higher level xarray datasets is provided in the input JSON file, under `aggregation_control`:
```json
"aggregation_control": {
"variable_column_name": "variable_id",
"groupby_attrs": [
"activity_id",
"institution_id",
"source_id",
"experiment_id",
"table_id",
"grid_label"
],
"aggregations": [{
"type": "union",
"attribute_name": "variable_id"
},
{
"type": "join_new",
"attribute_name": "member_id",
"options": {
"coords": "minimal",
"compat": "override"
}
},
{
"type": "join_new",
"attribute_name": "dcpp_init_year",
"options": {
"coords": "minimal",
"compat": "override"
}
}
]
}
```
Though these aggregation specifications are sufficient to merge individual data assets into xarray datasets, sometimes additional arguments must be provided depending on the format of the data assets.
For example, Zarr-based assets can be loaded with the option `consolidated=True`, which relies on a consolidated metadata file to describe the assets with minimal data egress.
```
dsets = col_subset.to_dataset_dict(zarr_kwargs={'consolidated': True}, storage_options={'token': 'anon'})
# list all merged datasets
[key for key in dsets.keys()]
```
When the datasets have finished loading, we can extract any of them like we would a value in a Python dictionary:
```
ds = dsets['ScenarioMIP.THU.CIESM.ssp585.Amon.gr']
ds
# Let’s create a quick plot for a slice of the data:
ds.tas.isel(time=range(1, 1000, 90))\
.plot(col="time", col_wrap=4, robust=True)
```
## Pangeo Catalog
Pangeo Catalog is an open-source project to enumerate and organize cloud-optimized climate data stored across a variety of providers.
In addition to offering various useful climate datasets in a consolidated location, the project also serves as a means of accessing public ESM data catalogs.
### Accessing catalogs using Python
At the core of the project is a [GitHub repository](https://github.com/pangeo-data/pangeo-datastore) containing several static intake catalogs in the form of YAML files.
Thanks to plugins like intake-esm and [intake-xarray](https://github.com/intake/intake-xarray), these catalogs can contain links to ESM data catalogs or data assets that can be loaded into xarray datasets, along with the arguments required to load them.
By editing these files using Git-based version control, anyone is free to contribute a dataset supported by the available [intake plugins](https://intake.readthedocs.io/en/latest/plugin-directory.html).
Users can then browse these catalogs by providing their associated URL as input into intake's `open_catalog()`; their tree-like structure allows a user to explore their entirety by simply opening the [root catalog](https://github.com/pangeo-data/pangeo-datastore/blob/master/intake-catalogs/master.yaml) and recursively walking through it:
```
cat = intake.open_catalog('https://raw.githubusercontent.com/pangeo-data/pangeo-datastore/master/intake-catalogs/master.yaml')
entries = cat.walk(depth=5)
[key for key in entries.keys()]
```
The catalogs can also be explored using intake's own `search()` method:
```
cat_subset = cat.search('cmip6')
list(cat_subset)
```
Once we have found a dataset or collection we want to explore, we can do so without the need for any user-supplied arguments:
```
cat.climate.tracmip()
```
### Accessing catalogs using catalog.pangeo.io
For those who don't want to initialize a Python environment to explore the catalogs, [catalog.pangeo.io](https://catalog.pangeo.io/) offers a means of viewing them from a standalone web application.
The website directly mirrors the catalogs in the GitHub repository, with previews of each dataset or collection loaded on the fly:
<img src="images/pangeo-catalog.png" alt="Example of an intake-esm collection on catalog.pangeo.io" width="1000">
From here, users can view the JSON input associated with an ESM collection and sort/subset its contents:
<img src="images/esm-demo.gif" alt="Example of an intake-esm collection on catalog.pangeo.io" width="800">
## Conclusion
With intake-esm, much of the toil associated with discovering, loading, and consolidating data assets can be eliminated.
In addition to making computations on huge datasets more accessible to the scientific community, the package also promotes reproducibility by providing simple methodology to create consistent datasets.
Coupled with Pangeo Catalog (which in itself is powered by intake), intake-esm gives climate scientists the means to create and distribute large data collections with instructions on how to use them essentially written into their ESM specifications.
There is still much work to be done with respect to intake-esm and Pangeo Catalog; in particular, goals include:
- Merging ESM collection specifications into [SpatioTemporal Asset Catalog (STAC) specification](https://stacspec.org/) to offer a more universal specification standard
- Development of tools to verify and describe catalogued data on a regular basis
- Restructuring of catalogs to allow subsetting by cloud provider region
[Please reach out](https://discourse.pangeo.io/) if you are interested in participating in any way.
## References
- [intake-esm documentation](https://intake-esm.readthedocs.io/en/latest/)
- [intake documentation](https://intake.readthedocs.io/en/latest/)
- [Pangeo Catalog on GitHub](https://github.com/pangeo-data/pangeo-datastore)
- [Pangeo documentation](http://pangeo.io/)
- [A list of existing, "known" catalogs](https://intake-esm.readthedocs.io/en/latest/faq.html#is-there-a-list-of-existing-catalogs)
|
github_jupyter
|
```
import matplotlib.pyplot as plt
import numpy as np
import pyart
import scipy
radar = pyart.io.read('/home/zsherman/cmac_test_radar.nc')
radar.fields.keys()
max_lat = 37
min_lat = 36
min_lon = -98.3
max_lon = -97
lal = np.arange(min_lat, max_lat, .2)
lol = np.arange(min_lon, max_lon, .2)
display = pyart.graph.RadarMapDisplay(radar)
fig = plt.figure(figsize=[10, 8])
display.plot_ppi_map('reflectivity', sweep=0, resolution='c',
vmin=-8, vmax=64, mask_outside=False,
cmap=pyart.graph.cm.NWSRef,
min_lat=min_lat, min_lon=min_lon,
max_lat=max_lat, max_lon=max_lon,
lat_lines=lal, lon_lines=lol)
# plt.savefig('')
print(radar.fields['gate_id']['notes'])
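# The notes string encodes the gate_id categories as comma-separated
# 'integer:name' pairs; parse it into a name -> integer lookup.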
cat_dict = {}
for pair_str in radar.fields['gate_id']['notes'].split(','):
print(pair_str)
cat_dict.update(
{pair_str.split(':')[1]:int(pair_str.split(':')[0])})
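# Build a gate filter that first excludes every gate, then re-includes only
# those classified as rain, melting layer, or snow.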
happy_gates = pyart.correct.GateFilter(radar)
happy_gates.exclude_all()
happy_gates.include_equal('gate_id', cat_dict['rain'])
happy_gates.include_equal('gate_id', cat_dict['melting'])
happy_gates.include_equal('gate_id', cat_dict['snow'])
max_lat = 37
min_lat = 36
min_lon = -98.3
max_lon = -97
lal = np.arange(min_lat, max_lat, .2)
lol = np.arange(min_lon, max_lon, .2)
display = pyart.graph.RadarMapDisplay(radar)
fig = plt.figure(figsize=[10, 8])
display.plot_ppi_map('reflectivity', sweep=1, resolution='c',
vmin=-8, vmax=64, mask_outside=False,
cmap=pyart.graph.cm.NWSRef,
min_lat=min_lat, min_lon=min_lon,
max_lat=max_lat, max_lon=max_lon,
lat_lines=lal, lon_lines=lol,
gatefilter=happy_gates)
# plt.savefig('')
grids1 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids1)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
grids2 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids2)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
grids3 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids3)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
grids4 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids4)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
```
|
github_jupyter
|
# Exploring Text Data (2)
## PyConUK talk abstract
Data set of abstracts for the PyConUK 2016 talks (retrieved 14th Sept 2016 from https://github.com/PyconUK/2016.pyconuk.org)
The data can be found in `../data/pyconuk2016/{keynotes,workshops,talks}/*`
There are 101 abstracts
## Load the data
Firstly, we load all the data into the `documents` dictionary
We also merge the documents into one big string, `corpus_all_in_one`, for convenience
```
import os
data_dir = os.path.join('..', 'data', 'pyconuk2016')
talk_types = ['keynotes', 'workshops', 'talks']
all_talk_files = [os.path.join(data_dir, talk_type, fname)
for talk_type in talk_types
for fname in os.listdir(os.path.join(data_dir, talk_type))]
documents = {}
for talk_fname in all_talk_files:
bname = os.path.basename(talk_fname)
talk_title = os.path.splitext(bname)[0]
with open(talk_fname, 'r') as f:
content = f.read()
documents[talk_title] = content
corpus_all_in_one = ' '.join([doc for doc in documents.values()])
print("Number of talks: {}".format(len(all_talk_files)))
print("Corpus size (char): {}".format(len(corpus_all_in_one)))
%matplotlib inline
import matplotlib.pyplot as plt
from wordcloud import WordCloud
cloud = WordCloud(max_words=100)
cloud.generate_from_text(corpus_all_in_one)
plt.figure(figsize=(12,8))
plt.imshow(cloud)
plt.axis('off')
plt.show()
all_talk_files[0]
%cat {all_talk_files[0]}
# For a list of magics type:
# %lsmagic
documents = {}
for talk_fname in all_talk_files:
bname = os.path.basename(talk_fname)
talk_title = os.path.splitext(bname)[0]
with open(talk_fname, 'r') as f:
content = ""
for line in f:
if line.startswith('title:'):
line = line[6:]
if line.startswith('subtitle:') \
or line.startswith('speaker:') \
or line.startswith('---'):
continue
content += line
documents[talk_title] = content
corpus_all_in_one = ' '.join([doc for doc in documents.values()])
%matplotlib inline
import matplotlib.pyplot as plt
from wordcloud import WordCloud
cloud = WordCloud(max_words=100)
cloud.generate_from_text(corpus_all_in_one)
plt.figure(figsize=(12,8))
plt.imshow(cloud)
plt.axis('off')
plt.show()
from collections import Counter
import string
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
stop_list = stopwords.words('english') + list(string.punctuation)
document_frequency = Counter()
for talk_id, content in documents.items():
try: # py3
tokens = word_tokenize(content)
except UnicodeDecodeError: # py27
tokens = word_tokenize(content.decode('utf-8'))
unique_tokens = [token.lower() for token in set(tokens)
if token.lower() not in stop_list]
document_frequency.update(unique_tokens)
for word, freq in document_frequency.most_common(20):
print("{}\t{}".format(word, freq))
# print(stop_list)
for item in ['will', "'ll", 'll']:
print("{} in stop_list == {}".format(item, item in stop_list))
from nltk import ngrams
try:
all_tokens = [t for t in word_tokenize(corpus_all_in_one)]
except UnicodeDecodeError:
all_tokens = [t for t in word_tokenize(corpus_all_in_one.decode('utf-8'))]
bigrams = ngrams(all_tokens, 2)
trigrams = ngrams(all_tokens, 3)
bi_count = Counter(bigrams)
tri_count = Counter(trigrams)
for phrase, freq in bi_count.most_common(20):
print("{}\t{}".format(phrase, freq))
for phrase, freq in tri_count.most_common(20):
print("{}\t{}".format(phrase, freq))
```
## Term Frequency (TF)
TF provides a weight for a term within a document, based on how frequently the term occurs in that document. Two common variants are:
- raw count: TF(term, doc) = count(term in doc)
- length-normalised: TF(term, doc) = count(term in doc) / len(doc)
## Inverse Document Frequency (IDF)
IDF provides a weight for a term across the collection, based on the document frequency of that term (the number of documents it appears in). With N the total number of documents, two common variants are:
- IDF(term) = log( N / DF(term) )
- smoothed: IDF(term) = log( 1 + N / DF(term) )
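As a quick illustration, here is a minimal sketch that reuses the `documents` dictionary, the `stop_list`, and the `document_frequency` counter computed above, and applies the smoothed IDF variant to the terms of a single talk:
```
import math
from collections import Counter
from nltk.tokenize import word_tokenize

# Term frequencies (raw counts) for a single talk
sample_title = list(documents.keys())[0]
sample_tokens = [t.lower() for t in word_tokenize(documents[sample_title])
                 if t.lower() not in stop_list]
tf = Counter(sample_tokens)

# Document frequencies were collected over the whole corpus above
n_docs = len(documents)

def tf_idf(term):
    return tf[term] * math.log(1 + n_docs / document_frequency[term])

print("Top TF-IDF terms for '{}':".format(sample_title))
for term, _ in tf.most_common(10):
    print("{}\t{:.3f}".format(term, tf_idf(term)))
```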
## Introducing sklearn
So far, we have used some homemade implementation to count words
What if we need something more involved?
sklearn (http://scikit-learn.org/) is one of the main libraries for Machine Learning in Python
With an easy-to-use interface, it provides support for a variety of Machine Learning models
We're going to use it to tackle a Text Classification problem
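Before moving on, here is a small taste of the library: a minimal sketch, assuming the `documents` dictionary from above, where `TfidfVectorizer` applies the TF and IDF weighting described in the previous sections.
```
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

titles = list(documents.keys())
corpus = [documents[title] for title in titles]

vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(corpus)
print("Document-term matrix shape:", X.shape)

try:
    feature_names = np.array(vectorizer.get_feature_names_out())
except AttributeError:  # older scikit-learn versions
    feature_names = np.array(vectorizer.get_feature_names())

# Top-weighted terms of the first talk
row = X[0].toarray().ravel()
top = row.argsort()[::-1][:10]
for term, weight in zip(feature_names[top], row[top]):
    print("{}\t{:.3f}".format(term, weight))
```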
```
from random import randint
winner = randint(1, 36)
print("And the winner is ... {}".format(winner))
from nltk import pos_tag
from nltk.tokenize import word_tokenize
s = "The quick brown fox jumped over the dog"
tokens = word_tokenize(s)
tokens
pos_tag(tokens)
```
|
github_jupyter
|
# Explicit Feedback Neural Recommender Systems
Goals:
- Understand recommender data
- Build different models architectures using Keras
- Retrieve Embeddings and visualize them
- Add metadata information as input to the model
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os.path as op
from zipfile import ZipFile
try:
from urllib.request import urlretrieve
except ImportError: # Python 2 compat
from urllib import urlretrieve
ML_100K_URL = "http://files.grouplens.org/datasets/movielens/ml-100k.zip"
ML_100K_FILENAME = ML_100K_URL.rsplit('/', 1)[1]
ML_100K_FOLDER = 'ml-100k'
if not op.exists(ML_100K_FILENAME):
print('Downloading %s to %s...' % (ML_100K_URL, ML_100K_FILENAME))
urlretrieve(ML_100K_URL, ML_100K_FILENAME)
if not op.exists(ML_100K_FOLDER):
print('Extracting %s to %s...' % (ML_100K_FILENAME, ML_100K_FOLDER))
ZipFile(ML_100K_FILENAME).extractall('.')
```
### Ratings file
Each line contains a rated movie:
- a user
- an item
- a rating from 1 to 5 stars
```
import pandas as pd
raw_ratings = pd.read_csv(op.join(ML_100K_FOLDER, 'u.data'), sep='\t',
names=["user_id", "item_id", "rating", "timestamp"])
raw_ratings.head()
```
### Item metadata file
The item metadata file contains metadata like the name of the movie or the date it was released. The movies file contains columns indicating the movie's genres. Let's only load the first five columns of the file with `usecols`.
```
m_cols = ['item_id', 'title', 'release_date', 'video_release_date', 'imdb_url']
items = pd.read_csv(op.join(ML_100K_FOLDER, 'u.item'), sep='|',
names=m_cols, usecols=range(5), encoding='latin-1')
items.head()
```
Let's write a bit of Python preprocessing code to extract the release year as an integer value:
```
def extract_year(release_date):
if hasattr(release_date, 'split'):
components = release_date.split('-')
if len(components) == 3:
return int(components[2])
# Missing value marker
return 1920
items['release_year'] = items['release_date'].map(extract_year)
items.hist('release_year', bins=50);
```
Enrich the raw ratings data with the collected items metadata:
```
all_ratings = pd.merge(items, raw_ratings)
all_ratings.head()
```
### Data preprocessing
To understand well the distribution of the data, the following statistics are computed:
- the number of users
- the number of items
- the rating distribution
- the popularity of each movie
```
max_user_id = all_ratings['user_id'].max()
max_user_id
max_item_id = all_ratings['item_id'].max()
max_item_id
all_ratings['rating'].describe()
```
Let's do a bit more pandas magic to compute the popularity of each movie (number of ratings):
```
popularity = all_ratings.groupby('item_id').size().reset_index(name='popularity')
items = pd.merge(popularity, items)
items.nlargest(10, 'popularity')
```
Enrich the ratings data with the popularity as an additional metadata.
```
all_ratings = pd.merge(popularity, all_ratings)
all_ratings.head()
```
Later in the analysis we will assume that this popularity does not come from the ratings themselves but from an external metadata, e.g. box office numbers in the month after the release in movie theaters.
Let's split the enriched data in a train / test split to make it possible to do predictive modeling:
```
from sklearn.model_selection import train_test_split
ratings_train, ratings_test = train_test_split(
all_ratings, test_size=0.2, random_state=0)
user_id_train = np.array(ratings_train['user_id'])
item_id_train = np.array(ratings_train['item_id'])
rating_train = np.array(ratings_train['rating'])
user_id_test = np.array(ratings_test['user_id'])
item_id_test = np.array(ratings_test['item_id'])
rating_test = np.array(ratings_test['rating'])
```
# Explicit feedback: supervised ratings prediction
For each pair of (user, item) try to predict the rating the user would give to the item.
This is the classical setup for building recommender systems from offline data with explicit supervision signal.
## Predictive ratings as a regression problem
The following code implements the following architecture:
<img src="images/rec_archi_1.svg" style="width: 600px;" />
```
from tensorflow.keras.layers import Embedding, Flatten, Dense, Dropout
from tensorflow.keras.layers import Dot
from tensorflow.keras.models import Model
# For each sample we input the integer identifiers
# of a single user and a single item
class RegressionModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.dot = Dot(axes=1)
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
y = self.dot([user_vecs, item_vecs])
return y
model = RegressionModel(30, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='mae')
# Useful for debugging the output shape of model
initial_train_preds = model.predict([user_id_train, item_id_train])
initial_train_preds.shape
```
### Model error
Using `initial_train_preds`, compute the model errors:
- mean absolute error
- mean squared error
Converting a pandas Series to numpy array is usually implicit, but you may use `rating_train.values` to do so explicitly. Be sure to monitor the shapes of each object you deal with by using `object.shape`.
```
# %load solutions/compute_errors.py
squared_differences = np.square(initial_train_preds[:,0] - rating_train)
absolute_differences = np.abs(initial_train_preds[:,0] - rating_train)
print("Random init MSE: %0.3f" % np.mean(squared_differences))
print("Random init MAE: %0.3f" % np.mean(absolute_differences))
# You may also compute these metrics directly with scikit-learn:
from sklearn.metrics import mean_squared_error, mean_absolute_error
print("Random init MSE: %0.3f" % mean_squared_error(initial_train_preds, rating_train))
print("Random init MAE: %0.3f" % mean_absolute_error(initial_train_preds, rating_train))
```
### Monitoring runs
Keras makes it possible to monitor various quantities during training.
`history.history` returned by the `model.fit` function is a dictionary
containing the `'loss'` and validation loss `'val_loss'` after each epoch
```
%%time
# Training the model
history = model.fit([user_id_train, item_id_train], rating_train,
batch_size=64, epochs=6, validation_split=0.1,
shuffle=True)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 2)
plt.legend(loc='best')
plt.title('Loss');
```
**Questions**:
- Why is the train loss higher than the first loss in the first few epochs?
- Why is Keras not computing the train loss on the full training set at the end of each epoch as it does on the validation set?
Now that the model is trained, the model MSE and MAE look nicer:
```
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
test_preds = model.predict([user_id_test, item_id_test])
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
train_preds = model.predict([user_id_train, item_id_train])
print("Final train MSE: %0.3f" % mean_squared_error(train_preds, rating_train))
print("Final train MAE: %0.3f" % mean_absolute_error(train_preds, rating_train))
```
## A Deep recommender model
Using a similar framework as previously, the following deep model described in the course was built (with only two fully connected layers)
<img src="images/rec_archi_2.svg" style="width: 600px;" />
To build this model we will need a new kind of layer:
```
from tensorflow.keras.layers import Concatenate
```
### Exercise
- The following code has **4 errors** that prevent it from working correctly. **Correct them and explain** why they are critical.
```
# For each sample we input the integer identifiers
# of a single user and a single item
class DeepRegressionModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
self.dropout = Dropout(0.99)
self.dense1 = Dense(64, activation="relu")
self.dense2 = Dense(2, activation="tanh")
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs])
y = self.dropout(input_vecs)
y = self.dense1(y)
y = self.dense2(y)
return y
model = DeepRegressionModel(30, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='binary_crossentropy')
initial_train_preds = model.predict([user_id_train, item_id_train])
# %load solutions/deep_explicit_feedback_recsys.py
# For each sample we input the integer identifiers
# of a single user and a single item
class DeepRegressionModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
## Error 1: Dropout was too high, preventing any training
self.dropout = Dropout(0.5)
self.dense1 = Dense(64, activation="relu")
## Error 2: output dimension was 2 where we predict only 1-d rating
## Error 3: tanh activation squashes the outputs between -1 and 1
## when we want to predict values between 1 and 5
self.dense2 = Dense(1)
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs])
y = self.dropout(input_vecs)
y = self.dense1(y)
y = self.dense2(y)
return y
model = DeepRegressionModel(30, max_user_id, max_item_id)
## Error 4: A binary crossentropy loss is only useful for binary
## classification, while we are in regression (use mse or mae)
model.compile(optimizer='adam', loss='mae')
initial_train_preds = model.predict([user_id_train, item_id_train])
%%time
history = model.fit([user_id_train, item_id_train], rating_train,
batch_size=64, epochs=5, validation_split=0.1,
shuffle=True)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 2)
plt.legend(loc='best')
plt.title('Loss');
train_preds = model.predict([user_id_train, item_id_train])
print("Final train MSE: %0.3f" % mean_squared_error(train_preds, rating_train))
print("Final train MAE: %0.3f" % mean_absolute_error(train_preds, rating_train))
test_preds = model.predict([user_id_test, item_id_test])
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
```
### Home assignment:
- Add another layer and compare train/test error (a minimal sketch follows this list)
- What do you notice?
- Try adding more dropout and modifying the layer sizes: should you increase or decrease the number of parameters?
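A minimal sketch for the first item is given below. This is only one possible variant, not the official solution: it reuses the corrected `DeepRegressionModel` architecture from above and inserts one extra hidden layer.
```
from tensorflow.keras.layers import Embedding, Flatten, Dense, Dropout, Concatenate
from tensorflow.keras.models import Model
from sklearn.metrics import mean_absolute_error

class DeeperRegressionModel(Model):
    def __init__(self, embedding_size, max_user_id, max_item_id):
        super().__init__()
        self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
                                        input_length=1, name='user_embedding')
        self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
                                        input_length=1, name='item_embedding')
        self.flatten = Flatten()
        self.concat = Concatenate()
        self.dropout = Dropout(0.5)
        self.dense1 = Dense(64, activation='relu')
        # Extra hidden layer compared to the two-layer model above
        self.dense2 = Dense(32, activation='relu')
        self.dense3 = Dense(1)

    def call(self, inputs):
        user_vecs = self.flatten(self.user_embedding(inputs[0]))
        item_vecs = self.flatten(self.item_embedding(inputs[1]))
        y = self.dropout(self.concat([user_vecs, item_vecs]))
        y = self.dense1(y)
        y = self.dense2(y)
        return self.dense3(y)

deeper_model = DeeperRegressionModel(30, max_user_id, max_item_id)
deeper_model.compile(optimizer='adam', loss='mae')
history = deeper_model.fit([user_id_train, item_id_train], rating_train,
                           batch_size=64, epochs=5, validation_split=0.1,
                           shuffle=True)

train_preds = deeper_model.predict([user_id_train, item_id_train])
test_preds = deeper_model.predict([user_id_test, item_id_test])
print("Deeper model train MAE: %0.3f" % mean_absolute_error(train_preds, rating_train))
print("Deeper model test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
```
Comparing the train and test errors of this deeper model with the previous one should give a feel for whether the extra capacity helps or mostly overfits on this small dataset.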
### Model Embeddings
- It is possible to retrieve the embeddings by simply using the Keras function `model.get_weights`, which returns all the model's learnable parameters.
- The weights are returned in the same order as they were built in the model
- What is the total number of parameters?
```
# weights and shape
weights = model.get_weights()
[w.shape for w in weights]
# Solution:
model.summary()
user_embeddings = weights[0]
item_embeddings = weights[1]
print("First item name from metadata:", items["title"][1])
print("Embedding vector for the first item:")
print(item_embeddings[1])
print("shape:", item_embeddings[1].shape)
```
### Finding most similar items
Finding k most similar items to a point in embedding space
- Write in numpy a function to compute the cosine similarity between two points in embedding space
- Write a function which computes the euclidean distance between a point in embedding space and all other points
- Write a most similar function, which returns the k item names with lowest euclidean distance
- Try with a movie index, such as 181 (Return of the Jedi). What do you observe? Don't expect miracles on such a small training set.
Notes:
- you may use `np.linalg.norm` to compute the norm of vector, and you may specify the `axis=`
- the numpy function `np.argsort(...)` enables to compute the sorted indices of a vector
- `items["title"][idxs]` returns the titles of the items indexed by the array idxs
```
# %load solutions/similarity.py
EPSILON = 1e-07
def cosine(x, y):
dot_pdt = np.dot(x, y.T)
norms = np.linalg.norm(x) * np.linalg.norm(y)
return dot_pdt / (norms + EPSILON)
# Computes cosine similarities between x and all item embeddings
def cosine_similarities(x):
dot_pdts = np.dot(item_embeddings, x)
norms = np.linalg.norm(x) * np.linalg.norm(item_embeddings, axis=1)
return dot_pdts / (norms + EPSILON)
# Computes euclidean distances between x and all item embeddings
def euclidean_distances(x):
return np.linalg.norm(item_embeddings - x, axis=1)
# Computes top_n most similar items to an idx,
def most_similar(idx, top_n=10, mode='euclidean'):
sorted_indexes=0
if mode == 'euclidean':
dists = euclidean_distances(item_embeddings[idx])
sorted_indexes = np.argsort(dists)
idxs = sorted_indexes[0:top_n]
return list(zip(items["title"][idxs], dists[idxs]))
else:
sims = cosine_similarities(item_embeddings[idx])
# [::-1] makes it possible to reverse the order of a numpy
# array, this is required because most similar items have
# a larger cosine similarity value
sorted_indexes = np.argsort(sims)[::-1]
idxs = sorted_indexes[0:top_n]
return list(zip(items["title"][idxs], sims[idxs]))
# sanity checks:
print("cosine of item 1 and item 1: %0.3f"
% cosine(item_embeddings[1], item_embeddings[1]))
euc_dists = euclidean_distances(item_embeddings[1])
print(euc_dists.shape)
print(euc_dists[1:5])
print()
# Test on movie 181: Return of the Jedi
print("Items closest to 'Return of the Jedi':")
for title, dist in most_similar(181, mode="euclidean"):
print(title, dist)
# We observe that the embedding is poor at representing similarities
# between movies, as most distance/similarities are very small/big
# One may notice a few clusters though
# it's interesting to plot the following distributions
# plt.hist(euc_dists)
# The reason for that is that the number of ratings is low and the embedding
# does not automatically capture semantic relationships in that context.
# Better representations arise with higher number of ratings, and less overfitting
# in models or maybe better loss function, such as those based on implicit
# feedback.
```
### Visualizing embeddings using TSNE
- we use scikit-learn to visualize the item embeddings
- Try different perplexities, and visualize the user embeddings as well (a sketch follows the next cell)
- What can you conclude?
```
from sklearn.manifold import TSNE
item_tsne = TSNE(perplexity=30).fit_transform(item_embeddings)
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
plt.scatter(item_tsne[:, 0], item_tsne[:, 1]);
plt.xticks(()); plt.yticks(());
plt.show()
```
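A minimal sketch of the suggested exercise: it assumes the `user_embeddings` array extracted earlier, and perplexity 50 is just an arbitrary alternative value to try.
```
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

user_tsne = TSNE(perplexity=50).fit_transform(user_embeddings)

plt.figure(figsize=(10, 10))
plt.scatter(user_tsne[:, 0], user_tsne[:, 1]);
plt.xticks(()); plt.yticks(());
plt.show()
```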
Alternatively with [Uniform Manifold Approximation and Projection](https://github.com/lmcinnes/umap):
```
!pip install umap-learn
import umap
item_umap = umap.UMAP().fit_transform(item_embeddings)
plt.figure(figsize=(10, 10))
plt.scatter(item_umap[:, 0], item_umap[:, 1]);
plt.xticks(()); plt.yticks(());
plt.show()
```
## Using item metadata in the model
Using a similar framework as previously, we will build another deep model that can also leverage additional metadata. The resulting system is therefore a **Hybrid Recommender System** that does both **Collaborative Filtering** and **Content-based recommendations**.
<img src="images/rec_archi_3.svg" style="width: 600px;" />
```
from sklearn.preprocessing import QuantileTransformer
meta_columns = ['popularity', 'release_year']
scaler = QuantileTransformer()
item_meta_train = scaler.fit_transform(ratings_train[meta_columns])
item_meta_test = scaler.transform(ratings_test[meta_columns])
class HybridModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
self.dense1 = Dense(64, activation="relu")
self.dropout = Dropout(0.5)
self.dense2 = Dense(32, activation='relu')
        self.dense3 = Dense(1)
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
meta_inputs = inputs[2]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs, meta_inputs])
y = self.dense1(input_vecs)
y = self.dropout(y)
y = self.dense2(y)
y = self.dense3(y)
return y
model = HybridModel(30, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='mae')
initial_train_preds = model.predict([user_id_train, item_id_train, item_meta_train])
%%time
history = model.fit([user_id_train, item_id_train, item_meta_train], rating_train,
batch_size=64, epochs=15, validation_split=0.1,
shuffle=True)
test_preds = model.predict([user_id_test, item_id_test, item_meta_test])
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
```
The additional metadata seems to improve the predictive power of the model a bit, at least in terms of MAE.
### A recommendation function for a given user
Once the model is trained, the system can be used to recommend a few items that a given user hasn't already seen:
- we use the `model.predict` to compute the ratings a user would have given to all items
- we build a reco function that sorts these items and exclude those the user has already seen
```
indexed_items = items.set_index('item_id')
def recommend(user_id, top_n=10):
item_ids = range(1, max_item_id)
seen_mask = all_ratings["user_id"] == user_id
seen_movies = set(all_ratings[seen_mask]["item_id"])
item_ids = list(filter(lambda x: x not in seen_movies, item_ids))
print("User %d has seen %d movies, including:" % (user_id, len(seen_movies)))
for title in all_ratings[seen_mask].nlargest(20, 'popularity')['title']:
print(" ", title)
print("Computing ratings for %d other movies:" % len(item_ids))
item_ids = np.array(item_ids)
user_ids = np.zeros_like(item_ids)
user_ids[:] = user_id
items_meta = scaler.transform(indexed_items[meta_columns].loc[item_ids])
rating_preds = model.predict([user_ids, item_ids, items_meta])
item_ids = np.argsort(rating_preds[:, 0])[::-1].tolist()
rec_items = item_ids[:top_n]
return [(items["title"][movie], rating_preds[movie][0])
for movie in rec_items]
for title, pred_rating in recommend(5):
print(" %0.1f: %s" % (pred_rating, title))
```
### Home assignment: Predicting ratings as a classification problem
In this dataset, the ratings all belong to a finite set of possible values:
```
import numpy as np
np.unique(rating_train)
```
Maybe we can help the model by forcing it to predict those values by treating the problem as a multiclass classification problem. The only required changes are:
- setting the final layer to output class membership probabilities using a softmax activation with 5 outputs;
- optimize the categorical cross-entropy classification loss instead of a regression loss such as MSE or MAE.
```
# %load solutions/classification.py
class ClassificationModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
self.dropout1 = Dropout(0.5)
self.dense1 = Dense(128, activation="relu")
self.dropout2 = Dropout(0.2)
self.dense2 = Dense(128, activation='relu')
self.dense3 = Dense(5, activation="softmax")
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs])
y = self.dropout1(input_vecs)
y = self.dense1(y)
y = self.dropout2(y)
y = self.dense2(y)
y = self.dense3(y)
return y
model = ClassificationModel(16, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
initial_train_preds = model.predict([user_id_train, item_id_train]).argmax(axis=1) + 1
print("Random init MSE: %0.3f" % mean_squared_error(initial_train_preds, rating_train))
print("Random init MAE: %0.3f" % mean_absolute_error(initial_train_preds, rating_train))
history = model.fit([user_id_train, item_id_train], rating_train - 1,
batch_size=64, epochs=15, validation_split=0.1,
shuffle=True)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 2)
plt.legend(loc='best')
plt.title('loss');
test_preds = model.predict([user_id_test, item_id_test]).argmax(axis=1) + 1
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
```
|
github_jupyter
|
Author: Maxime Marin
@: [email protected]
# Accessing IMOS data case studies: Walk-through and interactive session - Analysis
In this notebook, we will provide a recipe for further analysis to be done on the same dataset we selected earlier. In the future, a similar notebook can be tailored to a particular dataset, performing analysis that is easily repeatable.
Alternatively, curious users can "tweak" the code to their needs and perform slightly different analysis and visualisation. This is why we have deliberately left some code in the cells, rather than hiding it.
As always, we start by importing our data and some main libraries
```
import sys
import os
sys.path.append('/home/jovyan/intake-aodn')
import intake_aodn
import matplotlib.pyplot as plt
from intake_aodn.plot import Clim_plot
from intake_aodn.analysis import lin_trend, make_clim
import xarray as xr
data = xr.open_dataset('Example_Data.nc')
```
***
## 1) Climatology
Calculating the climatology of a particular variable is a very common operation performed in climate science. It allows us to quantify the "mean" state of a variable, which can later be subtracted from the variable to obtain anomalies.
The most common climatology is computed on a yearly timescale. It is equivalent to a yearly average and is useful for calculating linear trends:
```
# We will work with the box-average timeseries:
data_bavg = data.stack(space=['longitude','latitude']).mean(dim='space')
# Perform and plot annual climatology
clim,ax = Clim_plot(da = data_bavg['sea_surface_temperature'],time_res = 'year')
ylab = ax.get_ylabel() # stores the y-axis label to re-use later.
# Calculate the linear trend and confidence interval
coef,fit,hci,lci = lin_trend(clim,'year')
# Plot the linear model
fit['linear_fit'].plot(ax=ax,color='red',label = 'trend')
plt.fill_between(lci['year'].values,lci['linear_fit'].values,hci['linear_fit'].values,alpha=0.2,color='red')
# add label, title and legend
ax.set_ylabel(ylab)
ax.set_title('Annual Mean')
plt.legend();
```
We have plotted the annual averages of box-averaged SST, along with the corresponding linear trend. However, something seems to be off...
The first and last yearly values appear to be underestimated and overestimated, respectively. Why is that? (Hint: try executing `clim.year[0]` and `data.time[0]`. Hint 2: to access the last index of a list, use `[-1]`.)
```
# type code here
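# A possible check (sketch): compare the first and last years covered by the
# climatology with the first and last timestamps of the raw data. If the data
# only partially cover those calendar years, their "annual" means will be biased.
print(clim.year[0].values, clim.year[-1].values)
print(data.time[0].values, data.time[-1].values)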
```
The cell below outputs the same plot as before, but the 7th line is different. The function `Clim_plot()` takes `time_main` as an argument, which defines what time period we are interested in. Can we change line 7 to get a better plot?...
```
# We will work with the box-average timeseries:
data_bavg = data.stack(space=['longitude','latitude']).mean(dim='space')
# Perform and plot annual climatology
clim,ax = Clim_plot(da = data_bavg['sea_surface_temperature'],time_res = 'year',time_main = ['1992-01-01','2021-12-31'])
ylab = ax.get_ylabel() # stores the y-axis label to re-use later.
# Calculate the linear trend and confidence interval
coef,fit,hci,lci = lin_trend(clim,'year')
# Plot the linear model
fit['linear_fit'].plot(ax=ax,color='red',label = 'trend')
plt.fill_between(lci['year'].values,lci['linear_fit'].values,hci['linear_fit'].values,alpha=0.2,color='red')
# add label, title and legend
ax.set_ylabel(ylab)
ax.set_title('Annual Mean')
plt.legend();
```
***
## 2) Monthly climatology
Using the same function as before `Clim_plot` we can also calculate monthly climatology, which gives us the mean state of the variable for all months of the year.
To do so, we just have to change the `time_res` argument such as `time_res = 'month'`
```
clim,ax = Clim_plot(data_bavg['sea_surface_temperature'],time_res = 'month',time_main = ['1992-01-01','2021-12-31'],time_recent = ['2011-01-01', None],ind_yr = [2011,2018 ,2019])
ax.set_title('Monthly Climatology');
```
We notice that the function takes several other arguments than before, including `time_recent` and `ind_yr`.
`time_recent` tells the function to also plot monthly climatology for a "more recent" time period, reflecting the recent mean state.
`ind_yr` lets the user choose individual years to plot. This is useful for comparing one particular year with the average. For example, we clearly see that the 2011 summer was way warmer than usual!
Note: You can add these arguments when `time_res = 'year'`, but they will have no effect, as they have no purpose for annual climatology.
***
## 3) Linear trends
It can also be useful to visualise the spatial distribution of long-term changes or trends. Rather than plotting a linear trend from one timeseries, let's calculate linear trend coefficients at all pixels and map it.
```
from intake_aodn.plot import map_var, create_cb
from intake_aodn.analysis import make_clim
import cmocean
import numpy as np
#First, we calculate yearly averages
clim = make_clim(data['sea_surface_temperature'],time_res = 'year')
#Then we can compute our linear models
coef,fit,hci,lci= lin_trend(clim[0],'year',deg=1)
#We rename a variable so that our plot makes more sense
coef = coef.rename({'polyfit_coefficients':'SST trend [C/decade]'})
#let's plot
fig = plt.figure(figsize=(30,8))
ax,gl,axproj = map_var((coef.isel(degree=0)['SST trend [C/decade]']*10),[coef.longitude.min(),coef.longitude.max()],[coef.latitude.min(),coef.latitude.max()],
title = 'Linear Trends',
cmap = cmocean.cm.balance,
add_colorbar = False,
vmin = -0.4,vmax = 0.4,
levels=np.arange(-0.4,0.45,0.05));
cb = create_cb(fig,ax,axproj,'SST trend [C/decade]',size = "4%", pad = 0.5,labelpad = 20,fontsize=20)
```
***
## 4) Anomalies
Finally, let's use the climatology to compute anomalies. This is particularly useful for understanding the inter-annual variability of the system (ENSO influence), which requires removing the seasonal cycle.
```
from intake_aodn.analysis import time_average, make_clim
# Make monthly anomalies
mn_ano = time_average(data,'M',var='sea_surface_temperature',ignore_inc = True).groupby('time.month') - make_clim(data['sea_surface_temperature'],time_res = 'month')[0]
# Compute box-averge and plot
bavg = mn_ano.stack(space=['longitude','latitude']).mean(dim='space')
fig = plt.figure(figsize=(12,8))
bavg.plot(label='monthly',color='black')
# Make yearly running anomalies on monthly timescale
yr_rol = bavg.rolling(time = 12, center=True).mean()
# plot smoothed timeseries
yr_rol.plot(label='annual',color='red',lw = 2)
plt.legend();
# fix xlim
xl = mn_ano.coords['time'].values
plt.xlim(xl.min(),xl.max())
```
The plot shows that monthly variability can be large compared to inter-annual variability, which is why smoothing helps bring out the inter-annual patterns.
***
## 5) Multi maps
Finally, we give the user the possibility to make a publication-ready multi-panel map of a statistic of their data.
The first step requires the user to define the data to plot. Let's imagine we want to plot monthly SST anomalies that we calculated in part 4, over a specific year.
Once the data is ready, we simply call the `multimap()` function, giving it the dimension to spread across the subplot panels and the number of columns we want:
```
from intake_aodn.plot import multimap
import cmocean
from intake_aodn.analysis import time_average, make_clim
# Make monthly anomalies
da = time_average(data,'M',var='sea_surface_temperature',ignore_inc = True).groupby('time.month') - make_clim(data['sea_surface_temperature'],time_res = 'month')[0]
# data to plot
da = da.sel(time = slice('2011-01-01','2011-12-31'))
fig = multimap(da,col = 'time',col_wrap=4,freq = 'month')
```
***
## 6) Saving Figures
*But where is the save button!*
I know, I did not place a save button. But remember, you can always save any image by right-clicking on it. We are working in a browser!
Of course, we might want to save figures at a better quality, especially for publications. In reality, saving a plot is very easy: just insert the one-liner below at the end of any cell, choosing your file name and format:
```
plt.gcf().savefig("filename.changeformathere");#plt.gcf() indicates you want to save the latest figure.
```
|
github_jupyter
|
```
"""The purpose of this tutorial is to introduce you to:
(1) how gradient-based optimization of neural networks
operates in concrete practice, and
(2) how different forms of learning rules lead to more or less
efficient learning as a function of the shape of the optimization
landscape
This tutorial should be used in conjunction with the lecture:
http://cs375.stanford.edu/lectures/lecture6_optimization.pdf
""";
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
#the above imports the plotting library matplotlib
#standard imports
import time
import numpy as np
import h5py
#We're not using the GPU here, so we set the
#"CUDA_VISIBLE_DEVICES" environment variable to -1
#which tells tensorflow to only use the CPU
import os
os.environ["CUDA_VISIBLE_DEVICES"]="-1"
import tensorflow as tf
```
## Gradient Descent
```
#let's define a model which "believes" that the output data
#is scalar power of a scalar input, e.g. :
# y ~ x^p
#defining the scalar input data variable
batch_size = 200
#the "placeholder" mechanism is similar in effect to
# x = tf.get_variable('x', shape=(batch_size,), dtype=tf.float32)
#except we don't have to define a fixed name "x"
x = tf.placeholder(shape=(batch_size,), dtype=tf.float32)
#define the scalar power variable
initial_power = tf.zeros(shape=())
power = tf.get_variable('pow', initializer=initial_power, dtype=tf.float32)
#define the model
model = x**power
#the output data needs a variable too
y = tf.placeholder(shape=(batch_size,), dtype=tf.float32)
#the error rate of the model is mean L2 distance across
#the batch of data
power_loss = tf.reduce_mean((model - y)**2)
#now, our goal is to use gradient descent to
#figure out the parameter of our model -- namely, the power variable
grad = tf.gradients(power_loss, power)[0]
#Let's fit (optimize) the model.
#to do that we'll have to first of course define a tensorflow session
sess = tf.Session()
#... and initialize the power variable
initializer = tf.global_variables_initializer()
sess.run(initializer)
#ok ... so let's test the case where the true input-output relationship
#is x --> x^2
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**2
#OK
initial_guess = 0
assign_op = tf.assign(power, initial_guess)
sess.run(assign_op)
gradval = sess.run(grad, feed_dict={x: xval, y: yval})
gradval
#ok so this is telling us to do:
new_guess = initial_guess + -1 * (gradval)
print(new_guess)
#ok so let's assign the new guess to the power variable
assign_op = tf.assign(power, new_guess)
sess.run(assign_op)
#... and get the gradient again
gradval = sess.run(grad, feed_dict={x: xval, y: yval})
gradval
new_guess = new_guess + -1 * (gradval)
print(new_guess)
#... and one more time ...
assign_op = tf.assign(power, new_guess)
sess.run(assign_op)
#... get the gradient again
gradval = sess.run(grad, feed_dict={x: xval, y: yval})
print('gradient: %.3f' % gradval)
#... do the update
new_guess = new_guess + -1 * (gradval)
print('power: %.3f' % new_guess)
#ok so we're hovering back and forth around guess of 2.... which is right!
#OK let's do this in a real loop and keep track of useful stuff along the way
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**2
#start the guess off at 0 again
assign_op = tf.assign(power, 0)
sess.run(assign_op)
#let's keep track of the guess along the way
powers = []
#and the loss, which should go down
losses = []
#and the grads just for luck
grads = []
#let's iterate the gradient descent process 20 timesteps
num_iterations = 20
#for each timestep ...
for i in range(num_iterations):
#... get the current derivative (grad), the current guess of "power"
#and the loss, given the input and output training data (xval & yval)
cur_power, cur_loss, gradval = sess.run([power, power_loss, grad],
feed_dict={x: xval, y: yval})
#... keep track of interesting stuff along the way
powers.append(cur_power)
losses.append(cur_loss)
grads.append(gradval)
#... now do the gradient descent step
new_power = cur_power - gradval
#... and actually update the value of the power variable
assign_op = tf.assign(power, new_power)
sess.run(assign_op)
#and then, the loop runs again
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.plot(grads, label='gradients')
plt.xlabel('iterations')
plt.legend(loc='lower right')
plt.title('Estimating a quadratic')
##ok now let's try that again except where y ~ x^3
#all we need to do is change the data
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**3
#The rest of the code remains the same
assign_op = tf.assign(power, 0)
sess.run(assign_op)
powers = []
losses = []
grads = []
num_iterations = 20
for i in range(num_iterations):
cur_power, cur_loss, gradval = sess.run([power, power_loss, grad],
feed_dict={x: xval, y: yval})
powers.append(cur_power)
losses.append(cur_loss)
grads.append(gradval)
new_power = cur_power - gradval
assign_op = tf.assign(power, new_power)
sess.run(assign_op)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.xlabel('iterations')
plt.legend(loc='center right')
plt.title('Failing to estimate a cubic')
#wait ... this did *not* work. why?
#whoa ... the loss must have diverged to infinity (or close) really early
losses
#why?
#let's look at the gradients
grads
#hm. the gradient was getting big at the end.
#after all, the taylor series only works in the close-to-the-value limit.
#we must have been taking steps that were too big.
#how do we fix this?
```
### With Learning Rate
```
def gradient_descent(loss,
target,
initial_guess,
learning_rate,
training_data,
num_iterations):
#assign initial value to the target
initial_op = tf.assign(target, initial_guess)
#get the gradient
grad = tf.gradients(loss, target)[0]
#actually do the gradient descent step directly in tensorflow
newval = tf.add(target, tf.multiply(-grad, learning_rate))
#the optimizer step actually performs the parameter update
optimizer_op = tf.assign(target, newval)
#NB: none of the four steps above are actually running anything yet
#They are just formal graph computations.
#to actually do anything, you have to run stuff in a session.
#set up containers for stuff we want to keep track of
targetvals = []
losses = []
gradvals = []
#first actually run the initialization operation
sess.run(initial_op)
#now take gradient steps in a loop
for i in range(num_iterations):
#just by virtue of calling "run" on the "optimizer" op,
#the optimization occurs ...
output = sess.run({'opt': optimizer_op,
'grad': grad,
'target': target,
'loss': loss
},
feed_dict=training_data)
targetvals.append(output['target'])
losses.append(output['loss'])
gradvals.append(output['grad'])
return losses, targetvals, gradvals
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**3
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=.25, #chose learning rate < 1
training_data=data_dict,
num_iterations=20)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title('Estimating a cubic')
#ok -- now the result stably converges!
#and also for a higher power ....
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**4
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=0.1,
training_data=data_dict,
num_iterations=100)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title('Estimating a quartic')
#what about when the data is actually not of the right form?
xval = np.arange(0, 2, .01)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=0.1,
training_data=data_dict,
num_iterations=20)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='center right')
plt.title('Estimating sine with a power, not converged yet')
#doesn't look like it's converged yet -- maybe we need to run it longer?
#sine(x) now with more iterations
xval = np.arange(0, 2, .01)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=0.1,
training_data=data_dict,
num_iterations=100) #<-- more iterations
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='center right')
plt.title('Estimating sine with a power (badly)')
#ok it's converged but not to a great loss. This is unsurprising
#since x^p is a bad model for sine(x)
#how should we improve?
#THE MACHINE LEARNING ANSWER: well, let's have more parameters in our model!
#actually, let's write a model using the Taylor series idea more explicitly:
# y ~ sum_i a_i x^i
#for some coefficients a_i that we have to learn
#let's go out to x^5, so approximation_order = 6 (powers 0 through 5, since we're 0-indexing in python)
approximation_order = 6
#ok so now let's define the variable we'll be using
#instead of "power" this will be coefficients of the powers
#with one coefficient for each power from 0 to approximation_order-1
coefficients = tf.get_variable('coefficients',
initializer = tf.zeros(shape=(approximation_order,)),
dtype=tf.float32)
#gotta run the initializer again b/c we just defined a new trainable variable
initializer = tf.global_variables_initializer()
sess.run(initializer)
sess.run(coefficients)
#Ok let's define the model
#here's the vector of exponents
powervec = tf.range(0, approximation_order, dtype=tf.float32)
#we want to do essentially:
# sum_i coefficient_i * x^powervec[i]
#but to do x^powervec, we need to create an additional dimension on x
x_expanded = tf.expand_dims(x, axis=1)
#ok, now we can actually do x^powervec
x_exponentiated = x_expanded**powervec
#now multiply by the coefficient variable
x_multiplied_by_coefficients = coefficients * x_exponentiated
#and add up over the 1st dimension, i.e. doing the sum_i
polynomial_model = tf.reduce_sum(x_multiplied_by_coefficients, axis=1)
#the loss is again l2 difference between prediction and desired output
polynomial_loss = tf.reduce_mean((polynomial_model - y)**2)
xval = np.arange(-2, 2, .02)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
#starting out at 0 since the coefficients were all initialized to 0
sess.run(polynomial_model, feed_dict=data_dict)
#ok let's try it
losses, coefvals, grads = gradient_descent(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.1,
training_data=data_dict,
num_iterations=100)
#ok, so for each timestep we have 6 values -- the coefficients
print(len(coefvals))
coefvals[-1].shape
#here's the last set of coefficients learned
coefvals[-1]
#whoa -- what's going on?
#let's lower the learning rate
losses, coefvals, grads = gradient_descent(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.005, #<-- lowered learning rate
training_data=data_dict,
num_iterations=100)
#ok not quite as bad
coefvals[-1]
#let's visualize what we learned
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
#ok, fine, but not great
#what if we let it run longer?
losses, coefvals, grads = gradient_descent(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.005,
training_data=data_dict,
num_iterations=5000) #<-- more iterations
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Gradient Descent')
#ok much better
coefvals[-1]
tf.Variable(np.zeros(6))
```
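As a quick aside (a toy calculation, not part of the TF graph above), the update rule that `gradient_descent` implements is simply $\theta \leftarrow \theta - \eta\,\nabla_\theta L$. On a one-dimensional quadratic it is easy to see why a learning rate below 1 tames the divergence we just saw:
```
# minimal NumPy sketch: gradient descent with a learning rate on f(w) = (w - 3)**2
w, lr = 0.0, 0.25
for _ in range(20):
    grad = 2 * (w - 3)   # df/dw
    w = w - lr * grad    # same update rule as the helper above
print(w)                 # approaches 3; with lr >= 1 the iterates oscillate or diverge
```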
### With momentum
```
def gradient_descent_with_momentum(loss,
target,
initial_guess,
learning_rate,
momentum,
training_data,
num_iterations):
#set target to initial guess
initial_op = tf.assign(target, initial_guess)
#get gradient
grad = tf.gradients(loss, target)[0]
#set up the variable for the gradient accumulation
grad_shp = grad.shape.as_list()
#needs to be specified as float32 to interact properly with other things (but numpy defaults to float64)
grad_accum = tf.Variable(np.zeros(grad_shp).astype(np.float32))
#gradplus = grad + momentum * grad_accum
gradplus = tf.add(grad, tf.multiply(grad_accum, momentum))
#newval = oldval - learning_rate * gradplus
newval = tf.add(target, tf.multiply(-gradplus, learning_rate))
#the optimizer step actually performs the parameter update
optimizer_op = tf.assign(target, newval)
#this step updates grad_accum
update_accum = tf.assign(grad_accum, gradplus)
#run initialization
sess.run(initial_op)
#necessary b/c we've defined a new variable ("grad_accum") above
init_op = tf.global_variables_initializer()
sess.run(init_op)
#run the loop
targetvals = []
losses = []
gradvals = []
times = []
for i in range(num_iterations):
t0 = time.time()
output = sess.run({'opt': optimizer_op, #have to have this for optimization to occur
'accum': update_accum, #have to have this for grad_accum to update
'grad': grad, #the rest of these are just for keeping track
'target': target,
'loss': loss
},
feed_dict=training_data)
times.append(time.time() - t0)
targetvals.append(output['target'])
losses.append(output['loss'])
gradvals.append(output['grad'])
print('Average time per iteration --> %.5f' % np.mean(times))
return losses, targetvals, gradvals
losses, coefvals, grads = gradient_descent_with_momentum(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.01, #<-- can use higher learning rate!
momentum=0.9,
training_data=data_dict,
num_iterations=250) #<-- can get away with fewer iterations!
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Gradient Descent')
#so momentum is really useful
```
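For reference, the update implemented by `gradient_descent_with_momentum` can be written as
$$v_{t+1} = \nabla_\theta L(\theta_t) + m\, v_t, \qquad \theta_{t+1} = \theta_t - \eta\, v_{t+1},$$
where $m$ is the momentum coefficient, $\eta$ the learning rate, and the running term $v_t$ corresponds to the `grad_accum` variable above.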
### Tensorflow's Built-In Optimizers
```
def tf_builtin_optimization(loss,
optimizer_class,
target,
training_data,
num_iterations,
optimizer_args=(),
optimizer_kwargs={},
):
#construct the optimizer
optimizer = optimizer_class(*optimizer_args,
**optimizer_kwargs)
#formal tensorflow optimizers will always have a "minimize" method
#this is how you actually get the optimizer op
optimizer_op = optimizer.minimize(loss)
init_op = tf.global_variables_initializer()
sess.run(init_op)
targetvals = []
losses = []
times = []
for i in range(num_iterations):
t0 = time.time()
output = sess.run({'opt': optimizer_op,
'target': target,
'loss': loss},
feed_dict=training_data)
times.append(time.time() - t0)
targetvals.append(output['target'])
losses.append(output['loss'])
print('Average time per iteration --> %.5f' % np.mean(times))
return np.array(losses), targetvals
xval = np.arange(-2, 2, .02)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.GradientDescentOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=5000,
optimizer_args=(0.005,),
) #<-- more iterations
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Gradient Descent')
#right ok, we recovered what we did before by hand, now using
#the standard tensorflow tools
#Let's use the Momentum Optimizer. standard parameters for learning
#are learning_rate = 0.01 and momentum = 0.9
xval = np.arange(-2, 2, .02)
yval = np.sin(xval )
data_dict = {x: xval, y:yval}
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.MomentumOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=250,
optimizer_kwargs={'learning_rate': 0.01,
'momentum': 0.9})
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Momentum Optimizer')
#again reproducing what we see before by hand
#and we can try some other stuff, such as the Adam Optimizer
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.AdamOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=500,
optimizer_kwargs={'learning_rate': 0.01})
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Adam optimizer')
#Adam usually requires a few more steps than Momentum -- but the advantage of Adam
#is that sometimes Momentum blows up, while Adam is usually more stable
#(compare the loss traces! even though Momentum didn't blow up above, its
#loss is much more jagged -- a sign of potential blowup)
#so hm ... maybe because Adam is more stable we can jack up the
#initial learning rate and thus converge even faster than with Momentum
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.AdamOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=150,
optimizer_kwargs={'learning_rate': 0.5})
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Adam optimizer\nhigh initial learning rate')
#indeed we can!
```
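For reference (these are the standard update equations from the Adam paper by Kingma & Ba, not something computed in this notebook), Adam keeps running means of the gradient and of its square,
$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \qquad \theta_{t+1} = \theta_t - \eta\, \frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon},$$
where $\hat m_t$ and $\hat v_t$ are bias-corrected versions of $m_t$ and $v_t$. The per-parameter scaling by $\sqrt{\hat v_t}$ is what makes an aggressive initial learning rate like the 0.5 used above tolerable.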
### Newton's Method (Second Order)
```
def newtons_method(loss,
target,
initial_guess,
training_data,
num_iterations,
grad2clip=1.):
#create initialization operation
initial_op = tf.assign(target, initial_guess)
grad = tf.gradients(loss, target)[0]
#to actually compute the second order correction
#we split the one-variable and multi-variable cases up -- for ease of working
if len(target.shape) == 0: #one-variable case
#actually get the second derivative
grad2 = tf.gradients(grad, target)[0]
#now morally we want to compute:
# newval = target - grad / grad2
#BUT there is often numerical instability caused by dividing
#by grad2 if grad2 is small... so we have to clip grad2 by a clip value
clippedgrad2 = tf.maximum(grad2, grad2clip)
#and now we can do the newton's formula update
newval = tf.add(target, -tf.divide(grad, clippedgrad2))
else:
#in the multi-variable case, we first compute the hessian matrix
#thank gosh tensorflow has this built in finally!
hess = tf.hessians(loss, target)[0]
#now we take its inverse
hess_inv = tf.matrix_inverse(hess)
#now we get H^{-1} grad, i.e. multiply the matrix by the vector
hess_inv_grad = tf.tensordot(hess_inv, grad, 1)
#again we have to clip for numerical stability
hess_inv_grad = tf.clip_by_value(hess_inv_grad, -grad2clip, grad2clip)
#and get the new value for the parameters
newval = tf.add(target, -hess_inv_grad)
#the rest of the code is just as in the gradient descent case
optimizer_op = tf.assign(target, newval)
targetvals = []
losses = []
gradvals = []
sess.run(initial_op)
for i in range(num_iterations):
output = sess.run({'opt': optimizer_op,
'grad': grad,
'target': target,
'loss': loss},
feed_dict=training_data)
targetvals.append(output['target'])
losses.append(output['loss'])
gradvals.append(output['grad'])
return losses, targetvals, gradvals
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**2
data_dict = {x: xval, y:yval}
losses, powers, grads = newtons_method(loss=power_loss,
target=power,
initial_guess=0,
training_data=data_dict,
num_iterations=20,
grad2clip=1)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title("Newton's Method on Quadractic")
#whoa -- much faster than before
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**3
data_dict = {x: xval, y:yval}
losses, powers, grads = newtons_method(loss=power_loss,
target=power,
initial_guess=0,
training_data=data_dict,
num_iterations=20,
grad2clip=1)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title("Newton's Method on a Cubic")
xval = np.arange(-2, 2, .02)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, coefvals, grads = newtons_method(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
training_data=data_dict,
num_iterations=2)
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
#no joke -- the error goes to 0 after 1 update step
#let's try something a little more complicated
xval = np.arange(-2, 2, .02)
yval = np.cos(2 * xval) + np.sin(xval + 1)
data_dict = {x: xval, y:yval}
losses, coefvals, grads = newtons_method(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
training_data=data_dict,
num_iterations=5)
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
#really fast -- in fact Newton's method converges this fast whenever the model
#is linear in its parameters, so the squared-error loss is exactly quadratic in them
#just to put the above in context, let's compare to momentum
xval = np.arange(-2, 2, .02)
yval = np.cos(2 * xval) + np.sin(xval + 1)
data_dict = {x: xval, y:yval}
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.MomentumOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=200,
optimizer_kwargs={'learning_rate': 0.01,
'momentum': 0.9},
)
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
```
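In equation form, the two branches of `newtons_method` implement
$$\theta_{t+1} = \theta_t - \frac{L'(\theta_t)}{L''(\theta_t)} \qquad \text{and} \qquad \boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t - H^{-1}\nabla L(\boldsymbol{\theta}_t),$$
with the clipping above guarding against a near-zero second derivative (or near-singular Hessian).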
### Using External Optimizers
```
#actually, let's use an *external* optimizer -- not do
#the optimization itself in tensorflow
from scipy.optimize import minimize
#you can see all the methods for optimization here:
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize
#Ok here's the model we want to learn
xval = np.arange(-2, 2, .02)
yval = np.cosh(2 * xval) + np.sin(xval + 1)
plt.plot(xval, yval)
plt.title("Target to Learn")
polynomial_loss
#we need to make a python function from our tensorflow model
#(actually we could simply write the model directly in numpy
#but ... since we already have it in Tensorflow, we might as well use it)
def func_loss(vals):
data_dict = {x: xval,
y: yval,
coefficients: vals}
lossval = sess.run(polynomial_loss, feed_dict=data_dict)
losses.append(lossval)
return lossval
#Ok, so let's use a method that doesn't care about the derivative
#specifically "Nelder-Mead" -- this is a simplex-based method
losses = []
result = minimize(func_loss,
x0=np.zeros(6),
method='Nelder-Mead')
x0 = result.x
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval, label='True')
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}), label='Approx.')
plt.legend(loc='upper center')
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Nelder-Mead')
#OK now let's try a method that *does* care about the derivative
#specifically, a method called L-BFGS -- this is basically
#an approximate version of the newton's method.
#It's called a "quasi-second-order" method because it uses only
#first derivatives to get an approximation to the second derivative
#to use it, we *do* need to calculate the derivative
#... and here's why tensorflow STILL matters even if we're using
#an external optimizer
polynomial_grad = tf.gradients(polynomial_loss, coefficients)[0]
#we need to create a function that returns loss and loss derivative
def func_loss_with_grad(vals):
data_dict = {x: xval,
y:yval,
coefficients: vals}
lossval, g = sess.run([polynomial_loss, polynomial_grad],
feed_dict=data_dict)
losses.append(lossval)
return lossval, g.astype(np.float64)
#Ok, so let's see what happens with L-BFGS
losses = []
result = minimize(func_loss_with_grad,
x0=np.zeros(6),
method='L-BFGS-B', #approximation of newton's method
jac=True #<-- tells minimize that our function returns
#both the loss value and its derivative -- the
#so-called "jacobian"
)
x0 = result.x
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval, label='True')
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}), label='Approx.')
plt.legend(loc='upper center')
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with L-BFGS')
#substantially better than the non-derivative-based method
#-- fewer iterations are needed, the loss curve is more stable, and the final
#results are better
```
## Deploying it in a real case
```
#ok let's load the neural data
DATA_PATH = "/home/chengxuz/Class/psych253_2018/data/ventral_neural_data.hdf5"
Ventral_Dataset = h5py.File(DATA_PATH)
categories = Ventral_Dataset['image_meta']['category'][:] #array of category labels for all images --> shape == (5760,)
unique_categories = np.unique(categories) #array of unique category labels --> shape == (8,)
var_levels = Ventral_Dataset['image_meta']['variation_level'][:]
Neural_Data = Ventral_Dataset['time_averaged_trial_averaged'][:]
num_neurons = Neural_Data.shape[1]
num_categories = 8
categories[:10]
#we'll construct 8 one-vs-all vectors with {-1, 1} values
category_matrix = np.array([2 * (categories == c) - 1 for
c in unique_categories]).T.astype(int)
category_matrix[0]
sess = tf.Session()
#first, get initializers for W and b
initial_weights = tf.random_uniform(shape=(num_neurons, num_categories),
minval=-1,
maxval=1,
seed=0)
initial_bias = tf.zeros(shape=(num_categories,))
#now construct the TF variables
weights = tf.get_variable('weights',
dtype=tf.float32,
initializer=initial_weights)
bias = tf.get_variable('bias',
dtype=tf.float32,
initializer=initial_bias)#initialize variables
init_op = tf.global_variables_initializer()
sess.run(init_op)
#input slots for data and labels
#note the batch size is "None" -- effectively meaning batches of
#varying sizes can be used
neural_data = tf.placeholder(shape=(None, num_neurons),
dtype=tf.float32)
category_labels = tf.placeholder(shape=(None, num_categories),
dtype=tf.float32)
#now construct margins
margins = tf.matmul(neural_data, weights) + bias
#the hinge loss
hinge_loss = tf.maximum(0., 1. - category_labels * margins)
#and take the mean of the loss over the batch
hinge_loss_mean = tf.reduce_mean(hinge_loss)
#simple interface for using tensorflow built-in optimizer
#as seen in the previous class
def tf_optimize(loss,
optimizer_class,
target,
training_data,
num_iterations,
optimizer_args=(),
optimizer_kwargs=None,
sess=None,
initial_guesses=None):
if sess is None:
sess = tf.Session()
if optimizer_kwargs is None:
optimizer_kwargs = {}
#construct the optimizer
optimizer = optimizer_class(*optimizer_args,
**optimizer_kwargs)
optimizer_op = optimizer.minimize(loss)
#initialize variables
init_op = tf.global_variables_initializer()
sess.run(init_op)
if initial_guesses is not None:
for k, v in initial_guesses.items():
op = tf.assign(k, v)
sess.run(op)
targetvals = []
losses = []
times = []
for i in range(num_iterations):
t0 = time.time()
output = sess.run({'opt': optimizer_op,
'target': target,
'loss': loss},
feed_dict=training_data)
times.append(time.time() - t0)
targetvals.append(output['target'])
losses.append(output['loss'])
print('Average time per iteration --> %.5f' % np.mean(times))
return np.array(losses), targetvals
#let's just focus on one batch of data for the moment
batch_size = 640
data_batch = Neural_Data[0: batch_size]
label_batch = category_matrix[0: batch_size]
data_dict = {neural_data: data_batch,
category_labels: label_batch}
#let's look at the weights and biases before training
weight_vals, bias_vals = sess.run([weights, bias])
#right, it's num_neurons x num_categories
print('weights shape:', weight_vals.shape)
#let's look at some of the weights
plt.hist(weight_vals[:, 0])
plt.xlabel('Weight Value')
plt.ylabel('Neuron Count')
plt.title('Weights for Animals vs All')
print('biases:', bias_vals)
#ok so we'll use the Momentum optimizer to find weights and bias
#for this classification problem
losses, targs = tf_optimize(loss=hinge_loss_mean,
optimizer_class=tf.train.MomentumOptimizer,
target=[],
training_data=data_dict,
num_iterations=100,
optimizer_kwargs={'learning_rate': 1, 'momentum': 0.9},
sess=sess)
#losses decrease almost to 0
plt.plot(losses)
weight_vals, bias_vals = sess.run([weights, bias])
#right, it's num_neurons x num_categories
weight_vals.shape
#let's look at some of the weights
plt.hist(weight_vals[:, 2])
plt.xlabel('Weight Value')
plt.ylabel('Neuron Count')
plt.title('Weights for Faces vs All')
print('biases:', bias_vals)
#ok so things have been learned!
#how good are the results on training?
#actually get the predictions by first getting the margins
margin_vals = sess.run(margins, feed_dict = data_dict)
#now taking the argmax across categories
pred_inds = margin_vals.argmax(axis=1)
#compare prediction to actual
correct = pred_inds == label_batch.argmax(axis=1)
pct = correct.sum() / float(len(correct)) * 100
print('Training accuracy: %.2f%%' % pct)
#Right, very accurate on training
```
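As a sanity check on the hinge loss defined above, the same computation can be reproduced in plain NumPy on a couple of made-up margins (the values below are hypothetical, just to illustrate the formula):
```
import numpy as np

margins_np = np.array([[ 2.0, -0.5],
                       [ 0.3,  1.5]])
labels_np = np.array([[ 1, -1],
                      [-1,  1]])                     # {-1, +1} one-vs-all labels
hinge = np.maximum(0., 1. - labels_np * margins_np)  # zero loss once the margin exceeds 1
print(hinge.mean())                                  # matches hinge_loss_mean on the same inputs
```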
### Stochastic Gradient Descent
```
class BatchReader(object):
def __init__(self, data_dict, batch_size, shuffle=True, shuffle_seed=0, pad=True):
self.data_dict = data_dict
self.batch_size = batch_size
_k = list(data_dict.keys())[0]  # list() so this also works under Python 3
self.data_length = data_dict[_k].shape[0]
self.total_batches = (self.data_length - 1) // self.batch_size + 1
self.curr_batch_num = 0
self.curr_epoch = 1
self.pad = pad
self.shuffle = shuffle
self.shuffle_seed = shuffle_seed
if self.shuffle:
self.rng = np.random.RandomState(seed=self.shuffle_seed)
self.perm = self.rng.permutation(self.data_length)
def __iter__(self):
return self
def next(self):
return self.get_next_batch()
def get_next_batch(self):
data = self.get_batch(self.curr_batch_num)
self.increment_batch_num()
return data
def increment_batch_num(self):
m = self.total_batches
if (self.curr_batch_num >= m - 1):
self.curr_epoch += 1
if self.shuffle:
self.perm = self.rng.permutation(self.data_length)
self.curr_batch_num = (self.curr_batch_num + 1) % m
def get_batch(self, cbn):
data = {}
startv = cbn * self.batch_size
endv = (cbn + 1) * self.batch_size
if self.pad and endv > self.data_length:
startv = self.data_length - self.batch_size
endv = startv + self.batch_size
for k in self.data_dict:
if self.shuffle:
data[k] = self.data_dict[k][self.perm[startv: endv]]
else:
data[k] = self.data_dict[k][startv: endv]
return data
class TF_Optimizer(object):
"""Make the tensorflow SGD-style optimizer into a scikit-learn compatible class
Uses BatchReader for stochastically getting data batches.
model_func: function which returns tensorflow nodes for
predictions, data_input
loss_func: function which takes model_func prediction output node and
returns tensorflow nodes for
loss, label_input
optimizer_class: which tensorflow optimizer class to use when learning the model parameters
batch_size: which batch size to use in training
train_iterations: how many iterations to run the optimizer for
--> this should really be picked automatically, e.g. by stopping when the training
error plateaus
model_kwargs: dictionary of additional arguments for the model_func
loss_kwargs: dictionary of additional arguments for the loss_func
optimizer_args, optimizer_kwargs: additional position and keyword args for the
optimizer class
sess: tf session to use (will be constructed if not passed)
train_shuffle: whether to shuffle example order during training
"""
def __init__(self,
model_func,
loss_func,
optimizer_class,
batch_size,
train_iterations,
model_kwargs=None,
loss_kwargs=None,
optimizer_args=(),
optimizer_kwargs=None,
sess=None,
train_shuffle=False
):
self.model_func = model_func
if model_kwargs is None:
model_kwargs = {}
self.model_kwargs = model_kwargs
self.loss_func = loss_func
if loss_kwargs is None:
loss_kwargs = {}
self.loss_kwargs = loss_kwargs
self.train_shuffle=train_shuffle
self.train_iterations = train_iterations
self.batch_size = batch_size
if sess is None:
sess = tf.Session()
self.sess = sess
if optimizer_kwargs is None:
optimizer_kwargs = {}
self.optimizer = optimizer_class(*optimizer_args,
**optimizer_kwargs)
def fit(self, train_data, train_labels):
self.model, self.data_holder = self.model_func(**self.model_kwargs)
self.loss, self.labels_holder = self.loss_func(self.model, **self.loss_kwargs)
self.optimizer_op = self.optimizer.minimize(self.loss)
data_dict = {self.data_holder: train_data,
self.labels_holder: train_labels}
train_data = BatchReader(data_dict=data_dict,
batch_size=self.batch_size,
shuffle=self.train_shuffle,
shuffle_seed=0,
pad=True)
init_op = tf.global_variables_initializer()
self.sess.run(init_op)
self.losses = []
for i in range(self.train_iterations):
data_batch = train_data.next()
output = self.sess.run({'opt': self.optimizer_op,
'loss': self.loss},
feed_dict=data_batch)
self.losses.append(output['loss'])
def predict(self, test_data):
data_dict = {self.data_holder: test_data}
test_data = BatchReader(data_dict=data_dict,
batch_size=self.batch_size,
shuffle=False,
pad=False)
preds = []
for i in range(test_data.total_batches):
data_batch = test_data.get_batch(i)
pred_batch = self.sess.run(self.model, feed_dict=data_batch)
preds.append(pred_batch)
return np.row_stack(preds)
def binarize_labels(labels):
"""takes discrete-valued labels and binarizes them into {-1, 1}-value format
returns:
binarized_labels: of shape (num_stimuli, num_categories)
unique_labels: actual labels indicating order of first axis in binarized_labels
"""
unique_labels = np.unique(labels)
num_classes = len(unique_labels)
binarized_labels = np.array([2 * (labels == c) - 1 for
c in unique_labels]).T.astype(int)
return binarized_labels, unique_labels
class TF_OVA_Classifier(TF_Optimizer):
"""
Subclass of TF_Optimizer for use with categorizers. Basically, this class
handles data binarization (in the fit method) and un-binarization
(in the predict method), so that we can use the class with the function:
train_and_test_scikit_classifier
that we've previously defined.
The predict method here implements a one-vs-all approach for multi-class problems.
"""
def fit(self, train_data, train_labels):
#binarize labels
num_features = train_data.shape[1]
binarized_labels, classes_ = binarize_labels(train_labels)
#set .classes_ attribute, since this is needed by train_and_test_scikit_classifier
self.classes_ = classes_
num_classes = len(classes_)
#pass number of features and classes to the model construction
#function that will be called when the fit method is called
self.model_kwargs['num_features'] = num_features
self.model_kwargs['num_classes'] = num_classes
#now actually call the optimizer fit method
TF_Optimizer.fit(self, train_data=train_data,
train_labels=binarized_labels)
def decision_function(self, test_data):
#returns what are effectively the margins (for a linear classifier)
return TF_Optimizer.predict(self, test_data)
def predict(self, test_data):
#use the one-vs-all rule for multiclass prediction.
preds = self.decision_function(test_data)
preds = np.argmax(preds, axis=1)
classes_ = self.classes_
return classes_[preds]
def linear_classifier(num_features, num_classes):
"""generic form of a linear classifier, e.g. the model
margins = np.dot(data, weight) + bias
"""
initial_weights = tf.zeros(shape=(num_features,
num_classes),
dtype=tf.float32)
weights = tf.Variable(initial_weights,
dtype=tf.float32,
name='weights')
initial_bias = tf.zeros(shape=(num_classes,))
bias = tf.Variable(initial_bias,
dtype=tf.float32,
name='bias')
data = tf.placeholder(shape=(None, num_features), dtype=tf.float32, name='data')
margins = tf.add(tf.matmul(data, weights), bias, name='margins')
return margins, data
def hinge_loss(margins):
"""standard SVM hinge loss
"""
num_classes = margins.shape.as_list()[1]
category_labels = tf.placeholder(shape=(None, num_classes),
dtype=tf.float32,
name='labels')
h = tf.maximum(0., 1. - category_labels * margins, name='hinge_loss')
hinge_loss_mean = tf.reduce_mean(h, name='hinge_loss_mean')
return hinge_loss_mean, category_labels
#construct the classifier instance ... just like with scikit-learn
cls = TF_OVA_Classifier(model_func=linear_classifier,
loss_func=hinge_loss,
batch_size=2500,
train_iterations=1000,
train_shuffle=True,
optimizer_class=tf.train.MomentumOptimizer,
optimizer_kwargs = {'learning_rate':10.,
'momentum': 0.99
},
sess=sess
)
#ok let's try out our classifier on medium-variation data
data_subset = Neural_Data[var_levels=='V3']
categories_subset = categories[var_levels=='V3']
cls.fit(data_subset, categories_subset)
plt.plot(cls.losses)
plt.xlabel('number of iterations')
plt.ylabel('Hinge loss')
#ok how good was the actual training accuracy?
preds = cls.predict(data_subset)
acc = (preds == categories_subset).sum()
pct = acc / float(len(preds)) * 100
print('Training accuracy was %.2f%%' % pct)
```
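The `BatchReader` above can also be exercised on its own. Here is a minimal sketch with a hypothetical toy dictionary (plain string keys instead of the placeholder keys used inside `fit`), assuming the class is defined as above:
```
import numpy as np

# 10 examples with batch size 4 -> 3 batches, the last one padded
toy = {'x': np.arange(10).reshape(10, 1), 'y': np.arange(10)}
reader = BatchReader(data_dict=toy, batch_size=4, shuffle=False)
for _ in range(reader.total_batches):
    batch = reader.get_next_batch()
    print({k: v.shape for k, v in batch.items()})
```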
#### Side note on getting relevant tensors
```
#here's the linear model constructed above:
lin_model = cls.model
print(lin_model)
#suppose we want to access the weights / bias used in this model?
#these can be accessed by the "op.inputs" attribute in TF
#first, we see that this is the stage of the calculation
#where the linear model (the margins) is put together by adding
#the result of the matrix multiplication ("MatMul_[somenumber]")
#to the bias
list(lin_model.op.inputs)
#so the bias is the second of these inputs (index 1)
bias_tensor = lin_model.op.inputs[1]
bias_tensor
#if we follow up the calculation graph by taking apart
#whatever was the inputs to the matmul stage, we see
#the data and the weights
matmul_tensor = lin_model.op.inputs[0]
list(matmul_tensor.op.inputs)
#so the weights tensor is the second of *these* inputs (index 1)
weights_tensor = matmul_tensor.op.inputs[1]
weights_tensor
#putting this together, we could have done:
weights_tensor = lin_model.op.inputs[0].op.inputs[1]
weights_tensor
```
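An alternative sketch, using TF1's standard variable collections rather than walking `op.inputs` (note that this lists every trainable variable created so far in the default graph, including ones from earlier cells):
```
#list every trainable variable in the default graph with its name and shape
for v in tf.trainable_variables():
    print(v.name, v.shape)
#the classifier's parameters appear here under names like 'weights...:0' and 'bias...:0'
```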
#### Regularization
```
#we can define other loss functions -- such as L2 regularization
def hinge_loss_l2reg(margins, C, square=False):
#starts off the same as regular hinge loss
num_classes = margins.shape.as_list()[1]
category_labels = tf.placeholder(shape=(None, num_classes),
dtype=tf.float32,
name='labels')
h = tf.maximum(0., 1 - category_labels * margins)
#allows for squaring the hinge_loss optionally, as done in sklearn
if square:
h = h**2
hinge_loss = tf.reduce_mean(h)
#but now let's get the weights from the margins,
#using the method just explored above
weights = margins.op.inputs[0].op.inputs[1]
#and take the mean square of the weights -- the 0.5 is there for historical reasons
reg_loss = 0.5*tf.reduce_mean(weights**2)
#total up the loss from the two terms with constant C for weighting
total_loss = C * hinge_loss + reg_loss
return total_loss, category_labels
cls = TF_OVA_Classifier(model_func=linear_classifier,
loss_func=hinge_loss_l2reg,
loss_kwargs={'C':1},
batch_size=2500,
train_iterations=1000,
train_shuffle=True,
optimizer_class=tf.train.MomentumOptimizer,
optimizer_kwargs = {'learning_rate':10.,
'momentum': 0.99
},
sess=sess,
)
data_subset = Neural_Data[var_levels=='V3']
categories_subset = categories[var_levels=='V3']
cls.fit(data_subset, categories_subset)
plt.plot(cls.losses)
plt.xlabel('number of iterations')
plt.ylabel('Regularized Hinge loss')
preds = cls.predict(data_subset)
acc = (preds == categories_subset).sum()
pct = acc / float(len(preds)) * 100
print('Regularized training accuracy was %.2f%%' % pct)
#unsurprisingly, training accuracy goes down a bit with regularization
#compared to before w/o regularization
```
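Written out, the objective minimised by `hinge_loss_l2reg` (with `square=False`) is
$$\mathcal{L}(W, b) \;=\; C \cdot \operatorname{mean}\bigl(\max(0,\; 1 - y \odot m)\bigr) \;+\; \tfrac{1}{2}\,\operatorname{mean}\bigl(W \odot W\bigr),$$
so a larger $C$ emphasises fitting the training data while a smaller $C$ shrinks the weights -- the same convention for $C$ that scikit-learn's linear SVMs use.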
### Integrating with cross validation tools
```
import cross_validation as cv
meta_array = np.core.records.fromarrays(Ventral_Dataset['image_meta'].values(),
names=Ventral_Dataset['image_meta'].keys())
#the whole point of creating the TF_OVA_Classifier above
#was that we could simply stick it into the cross-validation regime
#that we'd previously set up for scikit-learn style classifiers
#so now let's test it out
#create some train/test splits
splits = cv.get_splits(meta_array,
lambda x: x['object_name'], #we're balancing splits by object
5,
5,
35,
train_filter=lambda x: (x['variation_level'] == 'V3'),
test_filter=lambda x: (x['variation_level'] == 'V3'),)
#here are the arguments to the classifier
model_args = {'model_func': linear_classifier,
'loss_func': hinge_loss_l2reg,
'loss_kwargs': {'C':5e-2, #<-- a good regularization value
},
'batch_size': 2500,
'train_iterations': 1000, #<-- about the right number of steps
'train_shuffle': True,
'optimizer_class':tf.train.MomentumOptimizer,
'optimizer_kwargs': {'learning_rate':.1,
'momentum': 0.9},
'sess': sess}
#so now it should work just like before
res = cv.train_and_test_scikit_classifier(features=Neural_Data,
labels=categories,
splits=splits,
model_class=TF_OVA_Classifier,
model_args=model_args)
#yep!
res[0]['test']['mean_accuracy']
```
#### Logistic Regression with Softmax loss
```
def softmax_loss_l2reg(margins, C):
"""this shows how to write softmax logistic regression
using tensorflow
"""
num_classes = margins.shape.as_list()[1]
category_labels = tf.placeholder(shape=(None, num_classes),
dtype=tf.float32,
name='labels')
#get the softmax from the margins
probs = tf.nn.softmax(margins)
#extract just the prob value for the correct category
#(we have the (cats + 1)/2 thing because the category_labels
#come in as {-1, +1} values but we need {0,1} for this purpose)
probs_cat_vec = probs * ((category_labels + 1.) / 2.)
#sum up over categories (actually only one term, that for
#the correct category, contributes on each row)
probs_cat = tf.reduce_sum(probs_cat_vec, axis=1)
#-log
neglogprob = -tf.log(probs_cat)
#average over the batch
log_loss = tf.reduce_mean(neglogprob)
weights = margins.op.inputs[0].op.inputs[1]
reg_loss = 0.5*tf.reduce_mean(tf.square(weights))
total_loss = C * log_loss + reg_loss
return total_loss, category_labels
model_args={'model_func': linear_classifier,
'model_kwargs': {},
'loss_func': softmax_loss_l2reg,
'loss_kwargs': {'C': 5e-3},
'batch_size': 2500,
'train_iterations': 1000,
'train_shuffle': True,
'optimizer_class':tf.train.MomentumOptimizer,
'optimizer_kwargs': {'learning_rate': 1.,
'momentum': 0.9
},
'sess': sess}
res = cv.train_and_test_scikit_classifier(features=Neural_Data,
labels=categories,
splits=splits,
model_class=TF_OVA_Classifier,
model_args=model_args)
res[0]['test']['mean_accuracy']
#ok works reasonably well
```
## Problem Statement
An experimental drug was tested on 2100 individuals in a clinical trial. The ages of the participants ranged from thirteen to one hundred. Half of the participants were under 65 years old; the other half were 65 years or older.
Ninety-five percent of the patients who were 65 or older experienced side effects, while ninety-five percent of the patients under 65 experienced no side effects.
You have to build a program that takes the age of a participant as input and predicts whether that patient suffered from a side effect or not.
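As a quick sanity check on these numbers (a small calculation that is not part of the original notebook), the statement implies roughly 1050 people per age group, of whom about 95% fall in the expected class and about 5% do not -- which is approximated by the 1000/50 split used when generating the dataset below:
```
# rough class counts implied by the problem statement
total = 2100
per_group = total // 2                # 1050 under 65, 1050 aged 65 or older
expected = round(0.95 * per_group)    # ~998 per group (side effects for older, none for younger)
unexpected = round(0.05 * per_group)  # ~52 per group
print(per_group, expected, unexpected)
```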
Steps:
• Generate a random dataset that adheres to these statements
• Divide the dataset into Training (90%) and Validation (10%) set
• Build a Simple Sequential Model
• Train and Validate the Model on the dataset
• Randomly choose 20% data from dataset as Test set
• Plot predictions made by the Model on Test set
## Generating Dataset
```
import numpy as np
from random import randint
from sklearn.utils import shuffle
from sklearn.preprocessing import MinMaxScaler
train_labels = [] # one means side effect experienced, zero means no side effect experienced
train_samples = []
for i in range(50):
# The 5% of younger individuals who did experience side effects
random_younger = randint(13, 64)
train_samples.append(random_younger)
train_labels.append(1)
# The 5% of older individuals who did not experience side effects
random_older = randint(65, 100)
train_samples.append(random_older)
train_labels.append(0)
for i in range(1000):
# The 95% of younger individuals who did not experience side effects
random_younger = randint(13, 64)
train_samples.append(random_younger)
train_labels.append(0)
# The 95% of older individuals who did experience side effects
random_older = randint(65, 100)
train_samples.append(random_older)
train_labels.append(1)
train_labels = np.array(train_labels)
train_samples = np.array(train_samples)
train_labels, train_samples = shuffle(train_labels, train_samples) # shuffles both arrays with the same permutation, removing any order imposed on the data set during the creation process
scaler = MinMaxScaler(feature_range = (0, 1)) # specifying scale (range: 0 to 1)
scaled_train_samples = scaler.fit_transform(train_samples.reshape(-1,1)) # rescales our data from its original range (13 to 100) into the one specified above (0 to 1); we use the reshape function because fit_transform does not accept 1-D data, hence we reshape accordingly here
```
## Building a Sequential Model
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
model = Sequential([
Dense(units = 16, input_shape = (1,), activation = 'relu'),
Dense(units = 32, activation = 'relu'),
Dense(units = 2, activation = 'softmax')
])
model.summary()
```
## Training the Model
```
model.compile(optimizer = Adam(learning_rate = 0.0001), loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
model.fit(x = scaled_train_samples, y = train_labels, validation_split = 0.1, batch_size = 10, epochs = 30, shuffle = True, verbose = 2)
```
## Preprocessing Test Data
```
test_labels = []
test_samples = []
for i in range(10):
# The 5% of younger individuals who did experience side effects
random_younger = randint(13, 64)
test_samples.append(random_younger)
test_labels.append(1)
# The 5% of older individuals who did not experience side effects
random_older = randint(65, 100)
test_samples.append(random_older)
test_labels.append(0)
for i in range(200):
# The 95% of younger individuals who did not experience side effects
random_younger = randint(13, 64)
test_samples.append(random_younger)
test_labels.append(0)
# The 95% of older individuals who did experience side effects
random_older = randint(65, 100)
test_samples.append(random_older)
test_labels.append(1)
test_labels = np.array(test_labels)
test_samples = np.array(test_samples)
test_labels, test_samples = shuffle(test_labels, test_samples)
scaled_test_samples = scaler.transform(test_samples.reshape(-1,1)) # reuse the scaler fitted on the training data rather than re-fitting it on the test set
```
## Testing the Model using Predictions
```
predictions = model.predict(x = scaled_test_samples, batch_size = 10, verbose = 0)
rounded_predictions = np.argmax(predictions, axis = -1)
```
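Before moving to the confusion matrix, a quick overall accuracy figure (not part of the original notebook) can be computed directly from the rounded predictions:
```
# fraction of test samples whose predicted class matches the true label
accuracy = np.mean(rounded_predictions == test_labels)
print('Test accuracy: %.2f%%' % (accuracy * 100))
```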
## Preparing Confusion Matrix
```
from sklearn.metrics import confusion_matrix
import itertools
import matplotlib.pyplot as plt
cm = confusion_matrix(y_true = test_labels, y_pred = rounded_predictions)
# This function has been taken from the website of scikit Learn. link: https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
```
## Plotting Predictions using Confusion Matrix
```
cm_plot_labels = ['no_side_effects', 'had_side_effects']
plot_confusion_matrix(cm = cm, classes = cm_plot_labels, title = 'Confusion Matrix')
```
# Riemannian Optimisation with Pymanopt for Inference in MoG models
The Mixture of Gaussians (MoG) model assumes that datapoints $\mathbf{x}_i\in\mathbb{R}^d$ follow a distribution described by the following probability density function:
$p(\mathbf{x}) = \sum_{m=1}^M \pi_m p_\mathcal{N}(\mathbf{x};\mathbf{\mu}_m,\mathbf{\Sigma}_m)$ where $\pi_m$ is the probability that the data point belongs to the $m^\text{th}$ mixture component and $p_\mathcal{N}(\mathbf{x};\mathbf{\mu}_m,\mathbf{\Sigma}_m)$ is the probability density function of a multivariate Gaussian distribution with mean $\mathbf{\mu}_m \in \mathbb{R}^d$ and psd covariance matrix $\mathbf{\Sigma}_m \in \{\mathbf{M}\in\mathbb{R}^{d\times d}: \mathbf{M}\succeq 0\}$.
As an example consider the mixture of three Gaussians with means
$\mathbf{\mu}_1 = \begin{bmatrix} -4 \\ 1 \end{bmatrix}$,
$\mathbf{\mu}_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ and
$\mathbf{\mu}_3 = \begin{bmatrix} 2 \\ -1 \end{bmatrix}$, covariances
$\mathbf{\Sigma}_1 = \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}$,
$\mathbf{\Sigma}_2 = \begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix}$ and
$\mathbf{\Sigma}_3 = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$
and mixture probability vector $\boldsymbol{\pi}=\left[0.1, 0.6, 0.3\right]^\top$.
Let's generate $N=1000$ samples of that MoG model and scatter plot the samples:
```
import autograd.numpy as np
np.set_printoptions(precision=2)
import matplotlib.pyplot as plt
%matplotlib inline
# Number of data points
N = 1000
# Dimension of each data point
D = 2
# Number of clusters
K = 3
pi = [0.1, 0.6, 0.3]
mu = [np.array([-4, 1]), np.array([0, 0]), np.array([2, -1])]
Sigma = [np.array([[3, 0],[0, 1]]), np.array([[1, 1.], [1, 3]]), 0.5 * np.eye(2)]
components = np.random.choice(K, size=N, p=pi)
samples = np.zeros((N, D))
# For each component, generate all needed samples
for k in range(K):
# indices of current component in X
indices = k == components
# number of those occurrences
n_k = indices.sum()
if n_k > 0:
samples[indices, :] = np.random.multivariate_normal(mu[k], Sigma[k], n_k)
colors = ['r', 'g', 'b', 'c', 'm']
for k in range(K):
indices = k == components
plt.scatter(samples[indices, 0], samples[indices, 1], alpha=0.4, color=colors[k%K])
plt.axis('equal')
plt.show()
```
Given a data sample the de facto standard method to infer the parameters is the [expectation maximisation](https://en.wikipedia.org/wiki/Expectation-maximization_algorithm) (EM) algorithm that, in alternating so-called E and M steps, maximises the log-likelihood of the data.
In [arXiv:1506.07677](http://arxiv.org/pdf/1506.07677v1.pdf) Hosseini and Sra propose Riemannian optimisation as a powerful counterpart to EM. Importantly, they introduce a reparameterisation that leaves local optima of the log-likelihood unchanged while resulting in a geodesically convex optimisation problem over a product manifold $\prod_{m=1}^M\mathcal{PD}^{(d+1)\times(d+1)}$ of manifolds of $(d+1)\times(d+1)$ symmetric positive definite matrices.
The proposed method is on par with EM and shows less variability in running times.
The reparameterised optimisation problem for augmented data points $\mathbf{y}_i=[\mathbf{x}_i^\top, 1]^\top$ can be stated as follows:
$$\min_{(\mathbf{S}_1, ..., \mathbf{S}_m, \boldsymbol{\nu}) \in \mathcal{D}}
-\sum_{n=1}^N\log\left(
\sum_{m=1}^M \frac{\exp(\nu_m)}{\sum_{k=1}^M\exp(\nu_k)}
q_\mathcal{N}(\mathbf{y}_n;\mathbf{S}_m)
\right)$$
where
* $\mathcal{D} := \left(\prod_{m=1}^M \mathcal{PD}^{(d+1)\times(d+1)}\right)\times\mathbb{R}^{M-1}$ is the search space
* $\mathcal{PD}^{(d+1)\times(d+1)}$ is the manifold of symmetric positive definite
$(d+1)\times(d+1)$ matrices
* $\nu_m = \log\left(\frac{\pi_m}{\pi_M}\right), \ m=1, ..., M-1$ and $\nu_M=0$
* $q_\mathcal{N}(\mathbf{y}_n;\mathbf{S}_m) =
2\pi\exp\left(\frac{1}{2}\right)
|\operatorname{det}(\mathbf{S}_m)|^{-\frac{1}{2}}(2\pi)^{-\frac{d+1}{2}}
\exp\left(-\frac{1}{2}\mathbf{y}_i^\top\mathbf{S}_m^{-1}\mathbf{y}_i\right)$
**Optimisation problems like this can easily be solved using Pymanopt – even without the need to differentiate the cost function manually!**
So let's infer the parameters of our toy example by Riemannian optimisation using Pymanopt:
```
import sys
sys.path.insert(0,"../..")
import autograd.numpy as np
from autograd.scipy.special import logsumexp
import pymanopt
from pymanopt.manifolds import Product, Euclidean, SymmetricPositiveDefinite
from pymanopt import Problem
from pymanopt.solvers import SteepestDescent
# (1) Instantiate the manifold
manifold = Product([SymmetricPositiveDefinite(D+1, k=K), Euclidean(K-1)])
# (2) Define cost function
# The parameters must be contained in a list theta.
@pymanopt.function.Autograd
def cost(S, v):
# Unpack parameters
nu = np.append(v, 0)
logdetS = np.expand_dims(np.linalg.slogdet(S)[1], 1)
y = np.concatenate([samples.T, np.ones((1, N))], axis=0)
# Calculate log_q
y = np.expand_dims(y, 0)
# 'Probability' of y belonging to each cluster
log_q = -0.5 * (np.sum(y * np.linalg.solve(S, y), axis=1) + logdetS)
alpha = np.exp(nu)
alpha = alpha / np.sum(alpha)
alpha = np.expand_dims(alpha, 1)
loglikvec = logsumexp(np.log(alpha) + log_q, axis=0)
return -np.sum(loglikvec)
problem = Problem(manifold=manifold, cost=cost, verbosity=1)
# (3) Instantiate a Pymanopt solver
solver = SteepestDescent()
# let Pymanopt do the rest
Xopt = solver.solve(problem)
```
Once Pymanopt has finished the optimisation we can obtain the inferred parameters as follows:
```
mu1hat = Xopt[0][0][0:2,2:3]
Sigma1hat = Xopt[0][0][:2, :2] - mu1hat.dot(mu1hat.T)
mu2hat = Xopt[0][1][0:2,2:3]
Sigma2hat = Xopt[0][1][:2, :2] - mu2hat.dot(mu2hat.T)
mu3hat = Xopt[0][2][0:2,2:3]
Sigma3hat = Xopt[0][2][:2, :2] - mu3hat.dot(mu3hat.T)
pihat = np.exp(np.concatenate([Xopt[1], [0]], axis=0))
pihat = pihat / np.sum(pihat)
```
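The slicing above relies on the block structure of the optimal $\mathbf{S}_m$ in the reparameterisation of Hosseini and Sra (this is the assumption behind the recovery formulas in the cell above):
$$\mathbf{S}_m = \begin{bmatrix} \mathbf{\Sigma}_m + \mathbf{\mu}_m\mathbf{\mu}_m^\top & \mathbf{\mu}_m \\ \mathbf{\mu}_m^\top & 1 \end{bmatrix},$$
so the mean is read off the last column, the covariance is the top-left block minus $\mathbf{\mu}_m\mathbf{\mu}_m^\top$, and the mixture weights are recovered by applying a softmax to $[\nu_1, ..., \nu_{M-1}, 0]$.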
And convince ourselves that the inferred parameters are close to the ground truth parameters.
The ground truth parameters $\mathbf{\mu}_1, \mathbf{\Sigma}_1, \mathbf{\mu}_2, \mathbf{\Sigma}_2, \mathbf{\mu}_3, \mathbf{\Sigma}_3, \pi_1, \pi_2, \pi_3$:
```
print(mu[0])
print(Sigma[0])
print(mu[1])
print(Sigma[1])
print(mu[2])
print(Sigma[2])
print(pi[0])
print(pi[1])
print(pi[2])
```
And the inferred parameters $\hat{\mathbf{\mu}}_1, \hat{\mathbf{\Sigma}}_1, \hat{\mathbf{\mu}}_2, \hat{\mathbf{\Sigma}}_2, \hat{\mathbf{\mu}}_3, \hat{\mathbf{\Sigma}}_3, \hat{\pi}_1, \hat{\pi}_2, \hat{\pi}_3$:
```
print(mu1hat)
print(Sigma1hat)
print(mu2hat)
print(Sigma2hat)
print(mu3hat)
print(Sigma3hat)
print(pihat[0])
print(pihat[1])
print(pihat[2])
```
Et voilà – this was a brief demonstration of how to do inference for MoG models by performing Manifold optimisation using Pymanopt.
## When Things Go Astray
A well-known problem when fitting parameters of a MoG model is that one Gaussian may collapse onto a single data point, resulting in singular covariance matrices (cf. e.g. p. 434 in Bishop, C. M. "Pattern Recognition and Machine Learning." 2006). This problem can be avoided by the following heuristic: if a component's covariance matrix is close to being singular we reset its mean and covariance matrix. Using Pymanopt this can be accomplished by using an appropriate line search rule (based on [LineSearchBackTracking](https://github.com/pymanopt/pymanopt/blob/master/pymanopt/solvers/linesearch.py)) -- here we demonstrate this approach:
```
class LineSearchMoG:
"""
Back-tracking line-search that checks for close to singular matrices.
"""
def __init__(self, contraction_factor=.5, optimism=2,
suff_decr=1e-4, maxiter=25, initial_stepsize=1):
self.contraction_factor = contraction_factor
self.optimism = optimism
self.suff_decr = suff_decr
self.maxiter = maxiter
self.initial_stepsize = initial_stepsize
self._oldf0 = None
def search(self, objective, manifold, x, d, f0, df0):
"""
Function to perform backtracking line-search.
Arguments:
- objective
objective function to optimise
- manifold
manifold to optimise over
- x
starting point on the manifold
- d
tangent vector at x (descent direction)
- df0
directional derivative at x along d
Returns:
- stepsize
norm of the vector retracted to reach newx from x
- newx
next iterate suggested by the line-search
"""
# Compute the norm of the search direction
norm_d = manifold.norm(x, d)
if self._oldf0 is not None:
# Pick initial step size based on where we were last time.
alpha = 2 * (f0 - self._oldf0) / df0
# Look a little further
alpha *= self.optimism
else:
alpha = self.initial_stepsize / norm_d
alpha = float(alpha)
# Make the chosen step and compute the cost there.
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = 1
# Backtrack while the Armijo criterion is not satisfied
while (newf > f0 + self.suff_decr * alpha * df0 and
step_count <= self.maxiter and
not reset):
# Reduce the step size
alpha = self.contraction_factor * alpha
# and look closer down the line
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = step_count + 1
# If we got here without obtaining a decrease, we reject the step.
if newf > f0 and not reset:
alpha = 0
newx = x
stepsize = alpha * norm_d
self._oldf0 = f0
return stepsize, newx
def _newxnewf(self, x, d, objective, manifold):
newx = manifold.retr(x, d)
try:
newf = objective(newx)
except np.linalg.LinAlgError:
replace = np.asarray([np.linalg.matrix_rank(newx[0][k, :, :]) != newx[0][0, :, :].shape[0]
for k in range(newx[0].shape[0])])
x[0][replace, :, :] = manifold.rand()[0][replace, :, :]
return x, objective(x), True
return newx, newf, False
```
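To actually use this line-search it would be handed to the solver. A sketch, assuming the `SteepestDescent` constructor in the Pymanopt version targeted here accepts a `linesearch` argument (as the reference to `LineSearchBackTracking` above suggests):
```
# hypothetical wiring of the custom line-search into the solver
solver = SteepestDescent(linesearch=LineSearchMoG())
Xopt = solver.solve(problem)
```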
```
import os
import csv
import platform
import pandas as pd
import networkx as nx
from graph_partitioning import GraphPartitioning, utils
run_metrics = True
cols = ["WASTE", "CUT RATIO", "EDGES CUT", "TOTAL COMM VOLUME", "Qds", "CONDUCTANCE", "MAXPERM", "NMI", "FSCORE", "FSCORE RELABEL IMPROVEMENT", "LONELINESS"]
pwd = %pwd
config = {
"DATA_FILENAME": os.path.join(pwd, "data", "predition_model_tests", "network", "rand_edge_weights", "network_1.txt"),
#"DATA_FILENAME": os.path.join(pwd, "data", "predition_model_tests", "network", "network_1.txt"),
"OUTPUT_DIRECTORY": os.path.join(pwd, "output"),
# Set which algorithm is run for the PREDICTION MODEL.
# Either: 'FENNEL' or 'SCOTCH'
"PREDICTION_MODEL_ALGORITHM": "SCOTCH",
# Alternativly, read input file for prediction model.
# Set to empty to generate prediction model using algorithm value above.
"PREDICTION_MODEL": "",
"PARTITIONER_ALGORITHM": "SCOTCH",
# File containing simulated arrivals. This is used in simulating nodes
# arriving at the shelter. Nodes represented by line number; value of
# 1 represents a node as arrived; value of 0 represents the node as not
# arrived or as not needing a shelter.
"SIMULATED_ARRIVAL_FILE": os.path.join(pwd,
"data",
"predition_model_tests",
"dataset_1_shift_rotate",
"simulated_arrival_list",
"percentage_of_prediction_correct_100",
"arrival_100_1.txt"
),
# File containing the prediction of a node arriving. This is different to the
# simulated arrivals, the values in this file are known before the disaster.
"PREDICTION_LIST_FILE": os.path.join(pwd,
"data",
"predition_model_tests",
"dataset_1_shift_rotate",
"prediction_list",
"prediction_1.txt"
),
# File containing the geographic location of each node, in "x,y" format.
"POPULATION_LOCATION_FILE": os.path.join(pwd,
"data",
"predition_model_tests",
"coordinates",
"coordinates_1.txt"
),
# Number of shelters
"num_partitions": 4,
# The number of iterations when making prediction model
"num_iterations": 1,
# Percentage of prediction model to use before discarding
# When set to 0, prediction model is discarded, useful for one-shot
"prediction_model_cut_off": 1.0,
# Alpha value used in one-shot (when restream_batches set to 1)
"one_shot_alpha": 0.5,
# Number of arrivals to batch before recalculating alpha and restreaming.
# When set to 1, one-shot is used with alpha value from above
"restream_batches": 1000,
# When the batch size is reached: if set to True, each node is assigned
# individually as first in first out. If set to False, the entire batch
# is processed and empty before working on the next batch.
"sliding_window": False,
# Create virtual nodes based on prediction model
"use_virtual_nodes": False,
# Virtual nodes: edge weight
"virtual_edge_weight": 1.0,
# Loneliness score parameter. Used when scoring a partition by how many
# lonely nodes exist.
"loneliness_score_param": 1.2,
####
# GRAPH MODIFICATION FUNCTIONS
# Also enables the edge calculation function.
"graph_modification_functions": True,
# If set, the node weight is set to 100 if the node arrives at the shelter,
# otherwise the node is removed from the graph.
"alter_arrived_node_weight_to_100": False,
# Uses generalized additive models from R to generate prediction of nodes not
# arrived. This sets the node weight on unarrived nodes the the prediction
# given by a GAM.
# Needs POPULATION_LOCATION_FILE to be set.
"alter_node_weight_to_gam_prediction": False,
# Enables edge expansion when graph_modification_functions is set to true
"edge_expansion_enabled": True,
# The value of 'k' used in the GAM will be the number of nodes arrived until
# it reaches this max value.
"gam_k_value": 100,
# Alter the edge weight for nodes that haven't arrived. This is a way to
# de-emphasise the prediction model for the unknown nodes.
"prediction_model_emphasis": 1.0,
# This applies the prediction_list_file node weights onto the nodes in the graph
# when the prediction model is being computed and then removes the weights
# for the cutoff and batch arrival modes
"apply_prediction_model_weights": True,
"SCOTCH_LIB_PATH": os.path.join(pwd, "libs/scotch/macOS/libscotch.dylib")
if 'Darwin' in platform.system()
else "/usr/local/lib/libscotch.so",
# Path to the PaToH shared library
"PATOH_LIB_PATH": os.path.join(pwd, "libs/patoh/lib/macOS/libpatoh.dylib")
if 'Darwin' in platform.system()
else os.path.join(pwd, "libs/patoh/lib/linux/libpatoh.so"),
"PATOH_ITERATIONS": 10,
# Expansion modes: 'no_expansion', 'avg_node_weight', 'total_node_weight', 'smallest_node_weight'
# 'largest_node_weight', 'product_node_weight'
# add '_squared' or '_sqrt' at the end of any of the above for ^2 or sqrt(weight)
# add '_complete' for applying the complete algorithm
# for hyperedge with weights: A, B, C, D
# new weights are computed
# (A*B)^2 = H0
# (A*C)^2 = H1, ... Hn-1
# then normal hyperedge expansion computed on H0...Hn-1
    # e.g. 'avg_node_weight_squared'
"PATOH_HYPEREDGE_EXPANSION_MODE": 'total_node_weight_sqrt_complete',
# Alters how much information to print. Keep it at 1 for this notebook.
# 0 - will print nothing, useful for batch operations.
# 1 - prints basic information on assignments and operations.
# 2 - prints more information as it batches arrivals.
"verbose": 1
}
#gp = GraphPartitioning(config)
# Optional: shuffle the order of nodes arriving
# Arrival order should not be shuffled if using GAM to alter node weights
#random.shuffle(gp.arrival_order)
%pylab inline
import scipy
iterations = 1000
#modes = ['product_node_weight_complete_sqrt']
modes = ['no_expansion', 'avg_node_weight_complete', 'total_node_weight_complete', 'smallest_node_weight_complete','largest_node_weight_complete']
#modes = ['no_expansion']
for mode in modes:
metricsDataPrediction = []
metricsDataAssign = []
dataQdsOv = []
dataCondOv = []
config['PATOH_HYPEREDGE_EXPANSION_MODE'] = mode
print('Mode', mode)
for i in range(0, iterations):
if (i % 50) == 0:
print('Mode', mode, 'Iteration', str(i))
config["DATA_FILENAME"] = os.path.join(pwd, "data", "predition_model_tests", "network", "network_" + str(i + 1) + ".txt")
gp = GraphPartitioning(config)
gp.verbose = 0
gp.load_network()
gp.init_partitioner()
m = gp.prediction_model()
metricsDataPrediction.append(m[0])
'''
#write_graph_files
#
gp.metrics_timestamp = datetime.datetime.now().strftime('%H%M%S')
f,_ = os.path.splitext(os.path.basename(gp.DATA_FILENAME))
gp.metrics_filename = f + "-" + gp.metrics_timestamp
if not os.path.exists(gp.OUTPUT_DIRECTORY):
os.makedirs(gp.OUTPUT_DIRECTORY)
if not os.path.exists(os.path.join(gp.OUTPUT_DIRECTORY, 'oslom')):
os.makedirs(os.path.join(gp.OUTPUT_DIRECTORY, 'oslom'))
file_oslom = os.path.join(gp.OUTPUT_DIRECTORY, 'oslom', "{}-all".format(gp.metrics_filename) + '-edges-oslom.txt')
with open(file_oslom, "w") as outf:
for e in gp.G.edges_iter(data=True):
outf.write("{}\t{}\t{}\n".format(e[0], e[1], e[2]["weight"]))
#file_oslom = utils.write_graph_files(gp.OUTPUT_DIRECTORY,
# "{}-all".format(gp.metrics_filename),
# gp.G,
# quiet=True)
community_metrics = utils.run_community_metrics(gp.OUTPUT_DIRECTORY,
"{}-all".format(gp.metrics_filename),
file_oslom)
dataQdsOv.append(float(community_metrics['Qds']))
dataCondOv.append(float(community_metrics['conductance']))
'''
ec = ''
tcv = ''
qds = ''
conductance = ''
maxperm = ''
nmi = ''
lonliness = ''
qdsOv = ''
condOv = ''
dataEC = []
dataTCV = []
dataQDS = []
dataCOND = []
dataMAXPERM = []
dataNMI = []
dataLonliness = []
for i in range(0, iterations):
dataEC.append(metricsDataPrediction[i][2])
dataTCV.append(metricsDataPrediction[i][3])
dataQDS.append(metricsDataPrediction[i][4])
dataCOND.append(metricsDataPrediction[i][5])
dataMAXPERM.append(metricsDataPrediction[i][6])
dataNMI.append(metricsDataPrediction[i][7])
dataLonliness.append(metricsDataPrediction[i][10])
# UNCOMMENT FOR BATCH ARRIVAL
#dataECB.append(metricsDataAssign[i][2])
#dataTCVB.append(metricsDataAssign[i][3])
if(len(ec)):
ec = ec + ','
ec = ec + str(metricsDataPrediction[i][2])
if(len(tcv)):
tcv = tcv + ','
tcv = tcv + str(metricsDataPrediction[i][3])
if(len(qds)):
qds = qds + ','
qds = qds + str(metricsDataPrediction[i][4])
if(len(conductance)):
conductance = conductance + ','
conductance = conductance + str(metricsDataPrediction[i][5])
if(len(maxperm)):
maxperm = maxperm + ','
maxperm = maxperm + str(metricsDataPrediction[i][6])
if(len(nmi)):
nmi = nmi + ','
nmi = nmi + str(metricsDataPrediction[i][7])
if(len(lonliness)):
lonliness = lonliness + ','
lonliness = lonliness + str(dataLonliness[i])
'''
if(len(qdsOv)):
qdsOv = qdsOv + ','
qdsOv = qdsOv + str(dataQdsOv[i])
if(len(condOv)):
condOv = condOv + ','
condOv = condOv + str(dataCondOv[i])
'''
ec = 'EC_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataEC)) + ',' + str(scipy.std(dataEC)) + ',' + ec
tcv = 'TCV_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataTCV)) + ',' + str(scipy.std(dataTCV)) + ',' + tcv
lonliness = "LONELINESS," + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataLonliness)) + ',' + str(scipy.std(dataLonliness)) + ',' + lonliness
qds = 'QDS_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataQDS)) + ',' + str(scipy.std(dataQDS)) + ',' + qds
conductance = 'CONDUCTANCE_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataCOND)) + ',' + str(scipy.std(dataCOND)) + ',' + conductance
maxperm = 'MAXPERM_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataMAXPERM)) + ',' + str(scipy.std(dataMAXPERM)) + ',' + maxperm
nmi = 'NMI_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataNMI)) + ',' + str(scipy.std(dataNMI)) + ',' + nmi
#qdsOv = 'QDS_OV,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataQdsOv)) + ',' + str(scipy.std(dataQdsOv)) + qdsOv
#condOv = 'CONDUCTANCE_OV,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(scipy.mean(dataCondOv)) + ',' + str(scipy.std(dataCondOv)) + condOv
print(ec)
print(tcv)
print(lonliness)
print(qds)
print(conductance)
print(maxperm)
#print(qdsOv)
#print(condOv)
```
|
github_jupyter
|
```
# Dataset from here
# https://archive.ics.uci.edu/ml/datasets/Adult
import great_expectations as ge
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
"""
age: continuous.
workclass: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
fnlwgt: continuous.
education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
education-num: continuous.
marital-status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
sex: Female, Male.
capital-gain: continuous.
capital-loss: continuous.
hours-per-week: continuous.
native-country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
"""
categorical_columns = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
continuous_columns = ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
df = ge.read_csv('../data/adult.data.b_2_train.csv')
df_test = ge.read_csv('../data/adult.data.b_2_test.csv')
df.head()
df.shape
df.expect_column_values_to_be_in_set('sex', ['Female', 'Male'])
def strip_spaces(df):
for column in df.columns:
if isinstance(df[column][0], str):
df[column] = df[column].apply(str.strip)
strip_spaces(df)
strip_spaces(df_test)
df.expect_column_values_to_be_in_set('sex', ['Female', 'Male'])
df['y'] = df['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
df_test['y'] = df_test['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
df['sex'].value_counts().plot(kind='bar')
sex_partition = ge.dataset.util.categorical_partition_data(df['sex'])
df.expect_column_chisquare_test_p_value_to_be_greater_than('sex', sex_partition)
df_test.expect_column_chisquare_test_p_value_to_be_greater_than('sex', sex_partition, output_format='SUMMARY')
plt.hist(df['age'])
age_partition = ge.dataset.util.continuous_partition_data(df['age'])
df.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('age', age_partition)
out = df_test.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('age', age_partition, output_format='SUMMARY')
print(out)
plt.plot(out['summary_obj']['expected_cdf']['x'], out['summary_obj']['expected_cdf']['cdf_values'])
plt.plot(out['summary_obj']['observed_cdf']['x'], out['summary_obj']['observed_cdf']['cdf_values'])
plt.plot(out['summary_obj']['expected_partition']['bins'][1:], out['summary_obj']['expected_partition']['weights'])
plt.plot(out['summary_obj']['observed_partition']['bins'][1:], out['summary_obj']['observed_partition']['weights'])
df['<=50k'].value_counts().plot(kind='bar')
df['education'].value_counts().plot(kind='bar')
education_partition = ge.dataset.util.categorical_partition_data(df['education'])
df.expect_column_chisquare_test_p_value_to_be_greater_than('education', education_partition)
df_test['education'].value_counts().plot(kind='bar')
df_test.expect_column_chisquare_test_p_value_to_be_greater_than('education', education_partition)
df_test.expect_column_kl_divergence_to_be_less_than('education', education_partition, threshold=0.1)
plt.hist(df['education-num'])
education_num_partition_auto = ge.dataset.util.continuous_partition_data(df['education-num'])
df.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('education-num', education_num_partition_auto)
education_num_partition_auto
education_num_partition_cat = ge.dataset.util.categorical_partition_data(df['education-num'])
df.expect_column_chisquare_test_p_value_to_be_greater_than('education-num', education_num_partition_cat)
df_test.expect_column_chisquare_test_p_value_to_be_greater_than('education-num', education_num_partition_cat)
education_num_partition = ge.dataset.util.continuous_partition_data(df['education-num'], bins='uniform', n_bins=10)
df.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('education-num', education_num_partition)
s1 = df['education'][df['y'] == 1].value_counts()
s1.name = 'education_y_1'
s2 = df['education'][df['y'] == 0].value_counts()
s2.name = 'education_y_0'
plotter = pd.concat([s1, s2], axis=1)
p1 = plt.bar(range(len(plotter)), plotter['education_y_0'])
p2 = plt.bar(range(len(plotter)), plotter['education_y_1'], bottom=plotter['education_y_0'])
plt.xticks(range(len(plotter)), plotter.index, rotation='vertical')
plt.show()
df.get_expectation_suite()
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.ensemble import RandomForestClassifier
def build_transformer(df_train):
le = {}
ohe = OneHotEncoder()
X_cat = pd.DataFrame()
for cat_column in categorical_columns:
le[cat_column] = LabelEncoder()
X_cat[cat_column + '_le'] = le[cat_column].fit_transform(df_train[cat_column])
X_cat = ohe.fit_transform(X_cat)
X_train = np.append(X_cat.toarray(), df_train[continuous_columns], axis=1)
return le, ohe, X_train
def apply_transformer(le, ohe, df_test):
X_cat = pd.DataFrame()
for cat_column in categorical_columns:
X_cat[cat_column + '_le'] = le[cat_column].transform(df_test[cat_column])
X_cat = ohe.transform(X_cat)
X_test = np.append(X_cat.toarray(), df_test[continuous_columns], axis=1)
return X_test
clf = RandomForestClassifier()
le, ohe, X_train = build_transformer(df)
clf.fit(X_train, df['y'])
clf.score(X_train, df['y'])
my_expectations = df.get_expectation_suite()
my_expectations
results = df_test.validate(expectation_suite=my_expectations)
results
failures = df_test.validate(expectation_suite=my_expectations, only_return_failures=True)
failures
X_test = apply_transformer(le, ohe, df_test)
clf.score(X_test, df_test['y'])
df_test_2 = ge.read_csv('../data/adult.data.b_1_train.csv')
strip_spaces(df_test_2)
#df_test_2 = df_test_2[df_test_2['native-country'] != 'Holand-Netherlands']
df_test_2['y'] = df_test_2['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
X_test_2 = apply_transformer(le, ohe, df_test_2)
clf.score(X_test_2, df_test_2['y'])
# Health Screening: Preventative Checkup!
failures = df_test_2.validate(my_expectations, only_return_failures=True, output_format='SUMMARY')
failures
df_test_2['sex'].value_counts().plot(kind='bar')
df_test_3 = ge.read_csv('../data/adult.data.b_1_test.csv')
strip_spaces(df_test_3)
#df_test_3 = df_test_3[df_test_3['native-country'] != 'Holand-Netherlands']
df_test_3['y'] = df_test_3['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
X_test_3 = apply_transformer(le, ohe, df_test_3)
clf.score(X_test_3, df_test_3['y'])
#What could have gone wrong?
#
# a. The world changed.
# b. New sensor means different data.
# c. Bueller? Bueller?
# d. Biased sample of the data
failures = df_test_2.validate(my_expectations, only_return_failures=True, output_format='SUMMARY')
failures
```
|
github_jupyter
|
# Nothing But NumPy: A 3-layer Binary Classification Neural Network on Iris Flowers
Part of the blog ["Nothing but NumPy: Understanding & Creating Binary Classification Neural Networks with Computational Graphs from Scratch"](https://medium.com/@rafayak/nothing-but-numpy-understanding-creating-binary-classification-neural-networks-with-e746423c8d5c)- by [Rafay Khan](https://twitter.com/RafayAK)
In this notebook we'll create a 3-layer neural network (i.e. two hidden layers and an output layer) and train it on the Iris dataset using _only_ **sepals** as input features to classify **Iris-virginica vs. others**
First, let's import NumPy, our neural net layers, the Binary Cross-Entropy(bce) Cost function and helper functions.
_Feel free to look into the helper functions in the utils directory._
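The stable binary cross-entropy cost lives in `util/cost_functions.py`; as a rough sketch only (not the repo's exact code), such a cost can be computed directly from the pre-activation values $Z$ of the output layer, returning both the cost and the gradient $dZ$ that the training loop later needs:
```
import numpy as np

def stable_bce_cost_sketch(Y, Z):
    """Sketch of a numerically stable BCE computed from logits Z; Y and Z have shape (1, m)."""
    m = Y.shape[1]
    # Stable form of -[Y*log(sigmoid(Z)) + (1-Y)*log(1 - sigmoid(Z))]
    cost = (1. / m) * np.sum(np.maximum(Z, 0) - Z * Y + np.log(1 + np.exp(-np.abs(Z))))
    dZ = (1. / m) * (1. / (1 + np.exp(-Z)) - Y)   # (sigmoid(Z) - Y) / m
    return cost, dZ
```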
```
import numpy as np
from Layers.LinearLayer import LinearLayer
from Layers.ActivationLayer import SigmoidLayer
from util.utilities import *
from util.cost_functions import compute_stable_bce_cost
import matplotlib.pyplot as plt
# to show all the generated plots inline in the notebook
%matplotlib inline
```

For convenience we'll load the data through [scikit-learn](https://scikit-learn.org/stable/index.html#).
If you don't have it installed please refer to this [link](https://scikit-learn.org/stable/install.html)
```
# load data from scikit-learn's datasets module
from sklearn.datasets import load_iris
iris = load_iris() # returns a python dictionary with the dataset
```
Let's see what the dataset contains:
```
list(iris.keys())
```
- **data**: contains the 4 features of each example in a row, has 150 rows
- **target**: contains the label for each example _(0->setosa, 1->versicolor, 2->virginica)_
- **target_names**: contains the names of each target label
- **DESCR**: contains the description of the dataset
- **feature_names**: contains the names of the 4 features(sepal length, sepal width, petal length, petal width)
- **filename** : where the file is located on the computer
Let's explore the data:
```
iris.data.shape # rows(examples), cols(features)
iris.target.shape # labels for 150 flowers
iris.target_names # print the name of the 3 labels(species) an example could belong to
iris.feature_names # name of each feature in data's columns
iris.data[:5, :] # print first 5 examples from the Iris dataset
iris.target[:5] # print labels for the first 5 examples in the Iris dataset
```
So, the data of the **first** 5 examples looks as follows:
| example # | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) | target | target name |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | 0 | setosa |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | 0 | setosa |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | 0 | setosa |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | 0 | setosa |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | 0 | setosa |
For our model we will only use **sepal length and sepal width** to classify whether the Iris flower is _virginica_ or _other_
```
# take only sepal length(0th col) and sepal width(1st col)
X = iris.data[:, :2]
# fix the labels shape so that instead of (150,) it's (150, 1),
# helps avoiding weird broadcasting errors
Y = (iris.target).reshape((150, 1))
X.shape
Y.shape
```
**Notice** in the table above that the first 5 examples belong to __'setosa'__ species, this pattern continues in the dataset(the pattern is all _setosa_ examples followed by _versicolor_ examples and finally _virginica_ examples). ___A good practice is to randomize the data before training a neural network, so that the neural network does not, by accident, learn a trivial ordering pattern in the data.___
So let's randomize the data
```
np.random.seed(48) # for reproducible randomization
random_indices = np.random.permutation(len(X)) # generate random indices
X_train = X[random_indices]
Y_train = Y[random_indices]
```
Now let's again print the first 5 examples and see the results (note this time there are only two features - _sepal length_ and _sepal width_)
```
X_train[:5, :]
Y_train[:5]
```
Now, the data of the **first** 5 examples looks as follows:
| example # | sepal length (cm) | sepal width (cm) | target | target name |
| --- | --- | --- | --- | --- |
| 0 | 5.7 | 2.9 | 1 | versicolor |
| 1 | 6.1 | 2.8 | 1 | versicolor |
| 2 | 6.1 | 2.6 | 2 | virginica |
| 3 | 4.5 | 2.3 | 0 | setosa |
| 4 | 5.9 | 3.2 | 1 | versicolor |
Finally, let's put the training set (`X_train`) and labels (`Y_train`) in the correct shapes, `(features, examples)` and `(1, examples)` respectively. Also we'll make the target label ___virginica=1___ and the rest ___0___.
```
# Transpose the data so that it's in the correct shape
# for passing through neural network
# also binarize the classes: virginica=1 and the rest 0
X_train = X_train.T
Y_train = Y_train.T
Y_train = (Y_train==2).astype('int') # uses bool logic to binarize labels: wherever label=2 output True(1), else False(0)
print("Shape of training data, X_train: {}".format(X_train.shape))
print("Shape of labels, Y_train: {}".format(Y_train.shape))
Y_train[:, :5] # print first five examples
```
Before training the neural net let's visualize the data:
```
from matplotlib.colors import ListedColormap  # explicit import (only pyplot was imported above)
cmap = ListedColormap(["red", "green"], name='from_list', N=None)
# scatter plot
scatter = plt.scatter(X_train.T[:, 0], X_train.T[:, 1],
s=200, c=np.squeeze(Y_train.T),
marker='x', cmap=cmap) # s-> size of marker
plt.xlabel('sepal length', size=20)
plt.ylabel('sepal width', size=20)
plt.legend(scatter.legend_elements()[0], ['others', 'virginica'])
plt.show()
```
### Notice that this data is very tough to classify perfectly, as many of the data points are intertwined (i.e. some green and red points are very close to each other)
***
***
#### Now we are ready to setup and train the Neural Network
This is the neural net architecture we'll use

```
# define training constants
learning_rate = 1
number_of_epochs = 10000
np.random.seed(48) # set seed value so that the results are reproducible
# (weights will now be initialized to the same pseudo-random numbers each time)
# Our network architecture has the shape:
# (input)--> [Linear->Sigmoid] -> [Linear->Sigmoid] -> [Linear->Sigmoid] -->(output)
#------ LAYER-1 ----- define 1st hidden layer that takes in training data
Z1 = LinearLayer(input_shape=X_train.shape, n_out=5, ini_type='xavier')
A1 = SigmoidLayer(Z1.Z.shape)
#------ LAYER-2 ----- define 2nd hidden layer that takes in values from 1st-hidden layer
Z2= LinearLayer(input_shape=A1.A.shape, n_out= 3, ini_type='xavier')
A2= SigmoidLayer(Z2.Z.shape)
#------ LAYER-3 ----- define output layer that takes in values from 2nd-hidden layer
Z3= LinearLayer(input_shape=A2.A.shape, n_out=1, ini_type='xavier')
A3= SigmoidLayer(Z3.Z.shape)
```
Now we can start the training loop:
```
costs = [] # initially empty list, this will store all the costs after a certain number of epochs
# Start training
for epoch in range(number_of_epochs):
# ------------------------- forward-prop -------------------------
Z1.forward(X_train)
A1.forward(Z1.Z)
Z2.forward(A1.A)
A2.forward(Z2.Z)
Z3.forward(A2.A)
A3.forward(Z3.Z)
# ---------------------- Compute Cost ----------------------------
cost, dZ3 = compute_stable_bce_cost(Y=Y_train, Z=Z3.Z)
# print and store Costs every 100 iterations and of the last iteration.
if (epoch % 100) == 0 or epoch == number_of_epochs - 1:
print("Cost at epoch#{}: {}".format(epoch, cost))
costs.append(cost)
# ------------------------- back-prop ----------------------------
Z3.backward(dZ3)
A2.backward(Z3.dA_prev)
Z2.backward(A2.dZ)
A1.backward(Z2.dA_prev)
Z1.backward(A1.dZ)
# ----------------------- Update weights and bias ----------------
Z3.update_params(learning_rate=learning_rate)
Z2.update_params(learning_rate=learning_rate)
Z1.update_params(learning_rate=learning_rate)
```
Now let's see how well the neural net performs on the training data after training has finished.
The `predict` helper function in the cell below returns three things (a rough sketch of such a helper follows this list):
* `p`: predicted labels (1 if the predicted probability is greater than the classification threshold `thresh`, else 0)
* `probas`: raw probabilities (how sure the neural net thinks the output is 1, this is just `P_hat`)
* `accuracy`: the number of correct predictions from total predictions
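For reference, here is a minimal sketch of what such a helper might look like. This is not the `util.utilities` implementation; it simply reuses the `forward` calls and `.Z`/`.A` attributes seen in the training loop above:
```
def predict_sketch(X, Y, Zs, As, thresh=0.5):
    # One forward pass through the Linear/Sigmoid layer pairs
    A_prev = X
    for Z_layer, A_layer in zip(Zs, As):
        Z_layer.forward(A_prev)
        A_layer.forward(Z_layer.Z)
        A_prev = A_layer.A
    probas = A_prev                       # P_hat: raw sigmoid outputs
    p = (probas > thresh).astype(int)     # hard 0/1 predictions
    accuracy = np.mean(p == Y) * 100      # percentage of correct predictions
    return p, probas, accuracy
```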
```
classifcation_thresh = 0.5
predicted_outputs, p_hat, accuracy = predict(X=X_train, Y=Y_train,
Zs=[Z1, Z2, Z3], As=[A1, A2, A3], thresh=classifcation_thresh)
print("The predicted outputs of first 5 examples: \n{}".format(predicted_outputs[:,:5]))
print("The predicted prbabilities of first 5 examples:\n {}".format(np.round(p_hat[:, :5], decimals=3)) )
print("\nThe accuracy of the model is: {}%".format(accuracy))
```
#### The Learning Curve
```
plot_learning_curve(costs, learning_rate, total_epochs=number_of_epochs)
```
#### The Decision Boundary
```
plot_decision_boundary(lambda x: predict_dec(Zs=[Z1, Z2, Z3], As=[A1, A2, A3], X=x.T, thresh=classifcation_thresh),
X=X_train.T, Y=Y_train)
```
#### The Shaded Decision Boundary
```
plot_decision_boundary_shaded(lambda x: predict_dec(Zs=[Z1, Z2, Z3], As=[A1, A2, A3], X=x.T, thresh=classifcation_thresh),
X=X_train.T, Y=Y_train)
```
## Bonus
Train this dataset using only a 1-layer or 2-layer neural network
_(Hint: works slightly better)_
|
github_jupyter
|
```
import san
from src_end2end import statistical_features
import lsa_features
import pickle
import numpy as np
from tqdm import tqdm
import pandas as pd
import os
import skopt
from skopt import gp_minimize
from sklearn import preprocessing
from skopt.space import Real, Integer, Categorical
from skopt.utils import use_named_args
from sklearn.metrics import f1_score
st_models = ["roberta-large-nli-stsb-mean-tokens", "xlm-r-large-en-ko-nli-ststb", "distilbert-base-nli-mean-tokens"]
from sentence_transformers import SentenceTransformer
st_models = ["roberta-large-nli-stsb-mean-tokens", "xlm-r-large-en-ko-nli-ststb", "distilbert-base-nli-mean-tokens"]
def embedd_bert(text, st_model = 'paraphrase-distilroberta-base-v1', split = 'train'):
paths = "temp_berts/"+st_model+"_"+split+'.pkl'
if os.path.isfile(paths):
sentence_embeddings = pickle.load(open(paths,'rb'))
return sentence_embeddings
model = SentenceTransformer(st_model)
sentence_embeddings = model.encode(text)
with open(paths, 'wb') as f:
pickle.dump(sentence_embeddings, f)
return sentence_embeddings
from sentence_transformers import SentenceTransformer
st_models = ["roberta-large-nli-stsb-mean-tokens", "xlm-r-large-en-ko-nli-ststb", "distilbert-base-nli-mean-tokens"]
def embedd_bert2(text, st_model = 'paraphrase-distilroberta-base-v1'):
text = [t[:512] for t in text]
model = SentenceTransformer(st_model)
sentence_embeddings = model.encode(text)
return sentence_embeddings
def export_kgs(dataset):
path = "representations/"+dataset+"/"
for split in ["train", "dev", "test"]:
for kg in ["complex", "transe", "quate", "simple", "rotate", "distmult"]:
path_tmp = path + split + "/" + kg + ".csv"
tmp_kg = prep_kgs(kg, split)
tmp_kg = np.array((tmp_kg))
np.savetxt(path_tmp, tmp_kg, delimiter=",")
def prep_kgs(kg_emb, split='train'):
embs = []
global dataset
path_in = "kg_emb_dump/"+dataset+"/"+split+"_"+kg_emb+'_n.pkl'
with open(path_in, "rb") as f:
kgs_p = pickle.load(f)
for x,y in kgs_p:
embs.append(y)
return embs
def export_kgs_spec(dataset):
path = "representations/"+dataset+"/"
for split in ["train", "dev", "test"]:
for kg in ["complex", "transe", "quate", "simple", "rotate", "distmult"]:
path_tmp = path + split + "/" + kg + "_entity.csv"
tmp_kg = prep_kgs2(kg, split)
tmp_kg = np.array((tmp_kg))
np.savetxt(path_tmp, tmp_kg, delimiter=",")
def export_LM(dataset):
texts = {}
ys = {}
path = "representations/"+dataset+"/"
for thing in ["train", "dev", "test"]:
path_in = "data/final/"+dataset+"/"+thing+'.csv'
df = pd.read_csv(path_in, encoding='utf-8')
texts[thing] = df.text_a.to_list()
#ys[thing] = df.label.to_list()
staticstical = statistical_features.fit_space(texts[thing])
kg = 'stat'
path_tmp = path + thing + "/" + kg + ".csv"
np.savetxt(path_tmp, staticstical, delimiter=",")
bertz = embedd_bert2(texts[thing], st_models[0])
kg = st_models[0]
path_tmp = path + thing + "/" + kg + ".csv"
np.savetxt(path_tmp, bertz, delimiter=",")
bertz2 = embedd_bert2(texts[thing], st_models[1])
kg = st_models[1]
path_tmp = path + thing + "/" + kg + ".csv"
np.savetxt(path_tmp, bertz2, delimiter=",")
bertz3 = embedd_bert2(texts[thing], st_models[2])
kg = st_models[2]
path_tmp = path + thing + "/" + kg + ".csv"
np.savetxt(path_tmp, bertz3, delimiter=",")
for dataset in tqdm(["pan2020", "AAAI2021_COVID19_fake_news", "LIAR_PANTS", "ISOT", "FakeNewsNet"]):
path = "representations/"+dataset+"/"
for thing in ["train", "dev", "test"]:
path_in = "data/final/"+dataset+"/"+thing+'.csv'
df = pd.read_csv(path_in, encoding='utf-8')
if dataset == "pan2020":
ys = df.labels.to_list()
else:
ys = df.label.to_list()
path_tmp = path + thing + "/" + "_ys.csv"
np.savetxt(path_tmp, ys, delimiter=",")
def prep_kgs2(kg_emb, split='train'):
embs = []
global dataset
path_in = "kg_emb_dump/"+dataset+"/"+split+"_"+kg_emb+'_speakers.pkl'
with open(path_in, "rb") as f:
kgs_p = pickle.load(f)
for x,y in kgs_p:
embs.append(y)
return embs
from tqdm import tqdm
for dataset in tqdm(["LIAR_PANTS", "FakeNewsNet"]):
export_kgs_spec(dataset)
for dataset in tqdm(["pan2020", "AAAI2021_COVID19_fake_news", "LIAR_PANTS", "ISOT", "FakeNewsNet"]):
export_kgs(dataset)
from tqdm import tqdm
for dataset in tqdm(["ISOT", "pan2020"]):#)""LIAR_PANTS","pan2020", "ISOT", "AAAI2021_COVID19_fake_news", "FakeNewsNet"]):
export_LM(dataset)
```
|
github_jupyter
|
# Programming_Assignment17
### Question1.
Create a function that takes three arguments a, b, c and returns the sum of the
numbers that are evenly divided by c from the range a, b inclusive.
Examples
evenly_divisible(1, 10, 20) ➞ 0
# No number between 1 and 10 can be evenly divided by 20.
evenly_divisible(1, 10, 2) ➞ 30
# 2 + 4 + 6 + 8 + 10 = 30
evenly_divisible(1, 10, 3) ➞ 18
# 3 + 6 + 9 = 18
```
def sumDivisibles(a, b, c):
sum = 0
for i in range(a, b + 1):
if (i % c == 0):
sum += i
return sum
a = int(input('Enter a : '))
b = int(input('Enter b : '))
c = int(input('Enter c : '))
print(sumDivisibles(a, b, c))
```
### Question2.
Create a function that returns True if a given inequality expression is correct and
False otherwise.
Examples
correct_signs("3 > 7 < 11") ➞ True
correct_signs("13 > 44 > 33 > 1") ➞ False
correct_signs("1 < 2 < 6 < 9 > 3") ➞ True
```
def correct_signs(txt):
    return eval(txt)
print(correct_signs("3 > 7 < 11"))
print(correct_signs("13 > 44 > 33 > 1"))
print(correct_signs("1 < 2 < 6 < 9 > 3"))
```
### Question3.
Create a function that replaces all the vowels in a string with a specified character.
Examples
replace_vowels('the aardvark', '#') ➞ 'th# ##rdv#rk'
replace_vowels('minnie mouse', '?') ➞ 'm?nn?? m??s?'
replace_vowels('shakespeare', '*') ➞ 'sh*k*sp**r*'
```
def replace_vowels(str, s):
vowels = 'AEIOUaeiou'
for ele in vowels:
str = str.replace(ele, s)
return str
input_str = input("enter a string : ")
s = input("enter a vowel replacing string : ")
print("\nGiven Sting:", input_str)
print("Given Specified Character:", s)
print("Afer replacing vowels with the specified character:",replace_vowels(input_str, s))
```
### Question4.
Write a function that calculates the factorial of a number recursively.
Examples
factorial(5) ➞ 120
factorial(3) ➞ 6
factorial(1) ➞ 1
factorial(0) ➞ 1
```
def factorial(n):
if n == 0:
return 1
return n * factorial(n-1)
num = int(input('enter a number :'))
print("Factorial of", num, "is", factorial(num))
```
### Question 5
Hamming distance is the number of characters that differ between two strings.
To illustrate:
String1: 'abcbba'
String2: 'abcbda'
Hamming Distance: 1 - 'b' vs. 'd' is the only difference.
Create a function that computes the hamming distance between two strings.
Examples
hamming_distance('abcde', 'bcdef') ➞ 5
hamming_distance('abcde', 'abcde') ➞ 0
hamming_distance('strong', 'strung') ➞ 1
```
def hamming_distance(str1, str2):
i = 0
count = 0
while(i < len(str1)):
if(str1[i] != str2[i]):
count += 1
i += 1
return count
# Driver code
str1 = "abcde"
str2 = "bcdef"
# function call
print(hamming_distance(str1, str2))
print(hamming_distance('strong', 'strung'))
hamming_distance('abcde', 'abcde')
```
|
github_jupyter
|
# Simulating a Predator and Prey Relationship
Without a predator, rabbits will reproduce until they reach the carrying capacity of the land. When coyotes show up, they will eat the rabbits and reproduce until they can't find enough rabbits. We will explore the fluctuations in the two populations over time.
# Using Lotka-Volterra Model
## Part 1: Rabbits without predators
According to [Mother Earth News](https://www.motherearthnews.com/homesteading-and-livestock/rabbits-on-pasture-intensive-grazing-with-bunnies-zbcz1504), a rabbit eats six square feet of pasture per day. Let's assume that our rabbits live in a five acre clearing in a forest: 217,800 square feet/6 square feet = 36,300 rabbit-days worth of food. For simplicity, let's assume the grass grows back in two months. Thus, the carrying capacity of five acres is 36,300/60 = 605 rabbits.
Female rabbits reproduce about six to seven times per year. They have six to ten children in a litter. According to [Wikipedia](https://en.wikipedia.org/wiki/Rabbit), a wild rabbit reaches sexual maturity when it is about six months old and typically lives one to two years. For simplicity, let's assume that in the presence of unlimited food, a rabbit lives forever, is immediately sexually mature, and has 1.5 children every month.
For our purposes, then, let $R_t$ be the number of rabbits in our five-acre clearing in month $t$.
$$
\begin{equation*}
R_t = R_{t-1} + 1.5\frac{605 - R_{t-1}}{605} R_{t-1}
\end{equation*}
$$
The formula could be put into general form
$$
\begin{equation*}
R_t = R_{t-1} + growth_{R} \times \big( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \big) R_{t-1}
\end{equation*}
$$
By doing this, we allow users to interact with the growth rate and the capacity value to visualize different interactions.
```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
from IPython.display import display, clear_output
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
style = {'description_width': 'initial'}
capacity_R = widgets.FloatText(description="Capacity", value=605)
growth_rate_R = widgets.FloatText(description="Growth rate", value=1.5)
initial_R = widgets.FloatText(description="Initial population",style=style, value=1)
button_R = widgets.Button(description="Plot Graph")
display(initial_R, capacity_R, growth_rate_R, button_R)
def plot_graph_r(b):
print("helo")
clear_output()
display(initial_R, capacity_R, growth_rate_R, button_R)
fig = plt.figure()
ax = fig.add_subplot(111)
t = np.arange(0, 20, 1)
s = np.zeros(t.shape)
R = initial_R.value
for i in range(t.shape[0]):
s[i] = R
R = R + growth_rate_R.value * (capacity_R.value - R)/(capacity_R.value) * R
if R < 0.0:
R = 0.0
ax.plot(t, s)
ax.set(xlabel='time (months)', ylabel='number of rabbits',
title='Rabbits Without Predators')
ax.grid()
button_R.on_click(plot_graph_r)
```
**Exercise 1** (1 point). Complete the following code to find the number of rabbits at time 5, given $R_0 = 10$, population capacity = 100, and growth rate = 0.8.
```
R_i = 10
for i in range(5):
R_i = int(R_i + 0.8 * (100 - R_i)/(100) * R_i)
print(f'There are {R_i} rabbits in the system at time 5')
```
## Tweaking the Growth Function
The growth is regulated by this part of the formula:
$$
\begin{equation*}
\frac{capacity_{R} - R_{t-1}}{capacity_{R}}
\end{equation*}
$$
That is, this fraction (and thus growth) goes to zero when the land is at capacity. As the number of rabbits goes to zero, this fraction goes to 1.0, so growth is at its highest speed. We could substitute in another function that has the same values at zero and at capacity, but has a different shape. For example,
$$
\begin{equation*}
\left( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \right)^{\beta}
\end{equation*}
$$
where $\beta$ is a positive number. For example, if $\beta$ is 1.3, it indicates that the rabbits can sense that food supplies are dwindling and pre-emptively slow their reproduction.
```
#### %matplotlib inline
import math
style = {'description_width': 'initial'}
capacity_R_2 = widgets.FloatText(description="Capacity", value=605)
growth_rate_R_2 = widgets.FloatText(description="Growth rate", value=1.5)
initial_R_2 = widgets.FloatText(description="Initial population",style=style, value=1)
shaping_R_2 = widgets.FloatText(description="Shaping", value=1.3)
button_R_2 = widgets.Button(description="Plot Graph")
display(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2)
def plot_graph_r(b):
clear_output()
display(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2)
fig = plt.figure()
ax = fig.add_subplot(111)
t = np.arange(0, 20, 1)
s = np.zeros(t.shape)
R = initial_R_2.value
beta = float(shaping_R_2.value)
for i in range(t.shape[0]):
s[i] = R
reserve_ratio = (capacity_R_2.value - R)/capacity_R_2.value
if reserve_ratio > 0.0:
R = R + R * growth_rate_R_2.value * reserve_ratio**beta
else:
R = R - R * growth_rate_R_2.value * (-1.0 * reserve_ratio)**beta
if R < 0.0:
R = 0
ax.plot(t, s)
ax.set(xlabel='time (months)', ylabel='number of rabbits',
title='Rabbits Without Predators (Shaped)')
ax.grid()
button_R_2.on_click(plot_graph_r)
```
**Exercise 2** (1 point). Repeat Exercise 1 with $\beta = 1.5$: complete the following code to find the number of rabbits at time 5. Should we expect to see more rabbits or fewer?
```
R_i = 10
b=1.5
for i in range(5):
R_i = int(R_i + 0.8 * ((100 - R_i)/(100))**b * R_i)
print(f'There are {R_i} rabbits in the system at time 5, fewer rabbits compared to exercise 1, where beta = 1')
```
## Part 2: Coyotes without Prey
According to [Huntwise](https://www.besthuntingtimes.com/blog/2020/2/3/why-you-should-coyote-hunt-how-to-get-started), coyotes need to consume about 2-3 pounds of food per day, and their diet is 90 percent mammalian. A typical adult cottontail rabbit weighs 2.6 pounds on average. Thus, we assume a coyote eats one rabbit per day.
For coyotes, the breeding season is in February and March. According to [Wikipedia](https://en.wikipedia.org/wiki/Coyote#Social_and_reproductive_behaviors), females have a gestation period of 63 days, with an average litter size of 6, though the number fluctuates depending on coyote population density and the abundance of food. By fall, the pups are old enough to hunt for themselves.
In the absence of rabbits, the number of coyotes will drop, as their food supply is scarce.
The formula could be put into general form:
$$
\begin{align*}
C_t & \sim (1 - death_{C}) \times C_{t-1}\\
&= C_{t-1} - death_{C} \times C_{t-1}
\end{align*}
$$
```
%matplotlib inline
style = {'description_width': 'initial'}
initial_C=widgets.FloatText(description="Initial Population",style=style,value=200.0)
declining_rate_C=widgets.FloatText(description="Death rate",value=0.5)
button_C=widgets.Button(description="Plot Graph")
display(initial_C, declining_rate_C, button_C)
def plot_graph_c(b):
clear_output()
display(initial_C, declining_rate_C, button_C)
fig = plt.figure()
ax = fig.add_subplot(111)
t1 = np.arange(0, 20, 1)
s1 = np.zeros(t1.shape)
C = initial_C.value
for i in range(t1.shape[0]):
s1[i] = C
C = (1 - declining_rate_C.value)*C
ax.plot(t1, s1)
ax.set(xlabel='time (months)', ylabel='number of coyotes',
title='Coyotes Without Prey')
ax.grid()
button_C.on_click(plot_graph_c)
```
**Exercise 3** (1 point). Assume the system has 100 coyotes at time 0 and that the death rate is 0.5 when there is no prey. At what point in time do the coyotes become (effectively) extinct?
```
ti = 0
coyotes_init = 100
c_i = coyotes_init
d_r = 0.5
while c_i > 10:
c_i= int((1 - d_r)*c_i)
ti =ti + 1
print(f'At time t={ti}, the coyotes become extinct')
```
## Part 3: Interaction Between Coyotes and Rabbit
With the simple dynamics from the first two parts in place, we can now combine them into a single predator-prey interaction.
$$
\begin{align*}
R_t &= R_{t-1} + growth_{R} \times \big( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \big) R_{t-1} - death_{R}(C_{t-1})\times R_{t-1}\\\\
C_t &= C_{t-1} - death_{C} \times C_{t-1} + growth_{C}(R_{t-1}) \times C_{t-1}
\end{align*}
$$
In the equations above, the death rate of the rabbits is a function of the number of coyotes. Similarly, the growth rate of the coyotes is a function of the number of rabbits.
The death rate of the rabbits should be $0$ if there are no coyotes and should approach $1$ if there are many coyotes. One formula with these characteristics is a hyperbolic function.
$$
\begin{equation}
death_R(C) = 1 - \frac{1}{xC + 1}
\end{equation}
$$
where $x$ determines how quickly $death_R$ increases as the number of coyotes ($C$) increases. Similarly, the growth rate of the coyotes should be $0$ if there are no rabbits and should grow without bound as the number of rabbits grows. One formula with these characteristics is a linear function.
$$
\begin{equation}
growth_C(R) = yR
\end{equation}
$$
where $y$ determines how quickly $growth_C$ increases as the number of rabbits ($R$) increases.
Putting it all together, the final equations are
$$
\begin{align*}
R_t &= R_{t-1} + growth_{R} \times \big( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \big) R_{t-1} - \big( 1 - \frac{1}{xC_{t-1} + 1} \big)\times R_{t-1}\\\\
C_t &= C_{t-1} - death_{C} \times C_{t-1} + yR_{t-1}C_{t-1}
\end{align*}
$$
**Exercise 4** (3 points). The model we have created above is a variation of the Lotka-Volterra model, which describes various forms of predator-prey interactions. Complete the following code, which should generate the state variables plotted over time. Blue = prey, orange = predators.
```
%matplotlib inline
initial_rabbit = widgets.FloatText(description="Initial Rabbit",style=style, value=1)
initial_coyote = widgets.FloatText(description="Initial Coyote",style=style, value=1)
capacity = widgets.FloatText(description="Capacity rabbits", style=style,value=5)
growth_rate = widgets.FloatText(description="Growth rate rabbits", style=style,value=1)
death_rate = widgets.FloatText(description="Death rate coyotes", style=style,value=1)
x = widgets.FloatText(description="Death rate ratio due to coyote",style=style, value=1)
y = widgets.FloatText(description="Growth rate ratio due to rabbit",style=style, value=1)
button = widgets.Button(description="Plot Graph")
display(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)
def plot_graph(b):
clear_output()
display(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)
fig = plt.figure()
ax = fig.add_subplot(111)
t = np.arange(0, 20, 0.5)
s = np.zeros(t.shape)
p = np.zeros(t.shape)
R = initial_rabbit.value
C = initial_coyote.value
for i in range(t.shape[0]):
s[i] = R
p[i] = C
R = R + growth_rate.value * (capacity.value - R)/(capacity.value) * R - (1 - 1/(x.value*C + 1))*R
C = C - death_rate.value * C + y.value*s[i]*C
    ax.plot(t, s, label="rabbit")
ax.plot(t, p, label="coyote")
ax.set(xlabel='time (months)', ylabel='population size',
title='Coyotes-Rabbit (Predator-Prey) Relationship')
ax.grid()
ax.legend()
button.on_click(plot_graph)
```
The system shows an oscillatory behavior. Let's try to verify the nonlinear oscillation in phase space visualization.
## Part 4: Trajectories and Direction Fields for a system of equations
To further demonstrate that predator numbers rise and fall cyclically with their preferred prey, we will use the Lotka-Volterra equations, which are based on differential equations. The Lotka-Volterra prey-predator model involves two equations: one describes the change in the number of prey and the other describes the change in the number of predators. The dynamics of the interaction between a rabbit population $R_t$ and a coyote population $C_t$ are described by the following differential equations:
$$
\begin{align*}
\frac{dR}{dt} = aR_t - bR_tC_t
\end{align*}
$$
$$
\begin{align*}
\frac{dC}{dt} = bdR_tC_t - cC_t
\end{align*}
$$
with the following notation:
$R_t$: number of prey (rabbits)
$C_t$: number of predators (coyotes)
a: natural growth rate of the rabbits when there are no coyotes
b: rate at which rabbits are killed by coyotes per unit of time
c: natural death rate of the coyotes when there are no rabbits
d: rate at which consumed prey is converted into new predators
We start by defining the system of ordinary differential equations and then find the equilibrium points of the system. Equilibrium occurs when the growth rates are 0. We have two equilibrium points in our example: the first happens when there are no prey and no predators, which represents the extinction of both species; the second happens when $R_t=\frac{c}{b d}$ and $C_t=\frac{a}{b}$. Moving on, we will use scipy to integrate the differential equations and generate the plot of the evolution of both species:
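Setting both derivatives to zero and factoring makes those equilibrium points explicit:
$$
\begin{align*}
\frac{dR}{dt} = R_t(a - bC_t) = 0 &\implies R_t = 0 \ \text{ or } \ C_t = \frac{a}{b},\\
\frac{dC}{dt} = C_t(bdR_t - c) = 0 &\implies C_t = 0 \ \text{ or } \ R_t = \frac{c}{bd}.
\end{align*}
$$
With the widget defaults below ($a = b = c = d = 1$), the non-trivial equilibrium is $(R_t, C_t) = (1, 1)$.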
**Exercise 5** (3 points). As we can tell from the simulation results of the predator-prey model, the system shows oscillatory behavior. Find the equilibrium points of the system and generate the phase-space visualization to demonstrate that the oscillation seen previously is nonlinear, with distorted orbits.
```
from scipy import integrate
#using the same input number from the previous example
input_a = widgets.FloatText(description="a",style=style, value=1)
input_b = widgets.FloatText(description="b",style=style, value=1)
input_c = widgets.FloatText(description="c",style=style, value=1)
input_d = widgets.FloatText(description="d",style=style, value=1)
# Define the system of ODEs
# P[0] is prey, P[1] is predator
def dP_dt(P,t=0):
return np.array([a*P[0]-b*P[0]*P[1], d*b*P[0]*P[1]-c*P[1]])
button_draw_trajectories = widgets.Button(description="Plot Graph")
display(input_a, input_b, input_c, input_d, button_draw_trajectories)
def plot_trajectories(graph):
global a, b, c, d, eq1, eq2
clear_output()
display(input_a, input_b, input_c, input_d, button_draw_trajectories)
a = input_a.value
b = input_b.value
c = input_c.value
d = input_d.value
# Define the Equilibrium points
eq1 = np.array([0. , 0.])
eq2 = np.array([c/(d*b),a/b])
values = np.linspace(0.1, 3, 10)
# Colors for each trajectory
vcolors = plt.cm.autumn_r(np.linspace(0.1, 1., len(values)))
f = plt.figure(figsize=(10,6))
t = np.linspace(0, 150, 1000)
for v, col in zip(values, vcolors):
# Starting point
P0 = v*eq2
P = integrate.odeint(dP_dt, P0, t)
plt.plot(P[:,0], P[:,1],
lw= 1.5*v, # Different line width for different trajectories
color=col, label='P0=(%.f, %.f)' % ( P0[0], P0[1]) )
ymax = plt.ylim(bottom=0)[1]
xmax = plt.xlim(left=0)[1]
nb_points = 20
x = np.linspace(0, xmax, nb_points)
y = np.linspace(0, ymax, nb_points)
X1,Y1 = np.meshgrid(x, y)
DX1, DY1 = dP_dt([X1, Y1])
M = (np.hypot(DX1, DY1))
M[M == 0] = 1.
DX1 /= M
DY1 /= M
plt.title('Trajectories and direction fields')
Q = plt.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=plt.cm.plasma)
plt.xlabel('Number of rabbits')
plt.ylabel('Number of coyotes')
plt.legend()
plt.grid()
plt.xlim(0, xmax)
plt.ylim(0, ymax)
print(f"\n\nThe equilibrium pointsof the system are:", list(eq1), list(eq2))
plt.show()
button_draw_trajectories.on_click(plot_trajectories)
```
The model here is described by continuous differential equations, so there are no jumps or intersections between the trajectories.
## Part 5: Multiple Predators and Preys Relationship
The previous relationship could be extended to multiple predators and preys relationship
**Exercise 6** (3 points). Develop a discrete-time mathematical model of four species in which each pair of similar species (two prey, two predators) competes for the same resource, and simulate its behavior. Plot the simulation results.
```
%matplotlib inline
initial_rabbit2 = widgets.FloatText(description="Initial Rabbit", style=style,value=2)
initial_coyote2 = widgets.FloatText(description="Initial Coyote",style=style, value=2)
initial_deer2 = widgets.FloatText(description="Initial Deer", style=style,value=1)
initial_wolf2 = widgets.FloatText(description="Initial Wolf", style=style,value=1)
population_capacity = widgets.FloatText(description="capacity",style=style, value=10)
population_capacity_rabbit = widgets.FloatText(description="capacity rabbit",style=style, value=3)
growth_rate_rabbit = widgets.FloatText(description="growth rate rabbit",style=style, value=1)
death_rate_coyote = widgets.FloatText(description="death rate coyote",style=style, value=1)
growth_rate_deer = widgets.FloatText(description="growth rate deer",style=style, value=1)
death_rate_wolf = widgets.FloatText(description="death rate wolf",style=style, value=1)
x1 = widgets.FloatText(description="death rate ratio due to coyote",style=style, value=1)
y1 = widgets.FloatText(description="growth rate ratio due to rabbit", style=style,value=1)
x2 = widgets.FloatText(description="death rate ratio due to wolf",style=style, value=1)
y2 = widgets.FloatText(description="growth rate ratio due to deer", style=style,value=1)
plot2 = widgets.Button(description="Plot Graph")
display(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity,
population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,
x1, y1,x2, y2, plot2)
def plot_graph(b):
clear_output()
display(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity,
population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,
x1, y1,x2, y2, plot2)
fig = plt.figure()
ax = fig.add_subplot(111)
t_m = np.arange(0, 20, 0.5)
r_m = np.zeros(t_m.shape)
c_m = np.zeros(t_m.shape)
d_m = np.zeros(t_m.shape)
w_m = np.zeros(t_m.shape)
R_m = initial_rabbit2.value
C_m = initial_coyote2.value
D_m = initial_deer2.value
W_m = initial_wolf2.value
population_capacity_deer = population_capacity.value - population_capacity_rabbit.value
for i in range(t_m.shape[0]):
r_m[i] = R_m
c_m[i] = C_m
d_m[i] = D_m
w_m[i] = W_m
R_m = R_m + growth_rate_rabbit.value * (population_capacity_rabbit.value - R_m)\
/(population_capacity_rabbit.value) * R_m - (1 - 1/(x1.value*C_m + 1))*R_m - (1 - 1/(x2.value*W_m + 1))*R_m
D_m = D_m + growth_rate_deer.value * (population_capacity_deer - D_m) \
/(population_capacity_deer) * D_m - (1 - 1/(x1.value*C_m + 1))*D_m - (1 - 1/(x2.value*W_m + 1))*D_m
C_m = C_m - death_rate_coyote.value * C_m + y1.value*r_m[i]*C_m + y2.value*d_m[i]*C_m
W_m = W_m - death_rate_wolf.value * W_m + y1.value*r_m[i]*W_m + y2.value*d_m[i]*W_m
    ax.plot(t_m, r_m, label="rabbit")
ax.plot(t_m, c_m, label="coyote")
ax.plot(t_m, d_m, label="deer")
ax.plot(t_m, w_m, label="wolf")
ax.set(xlabel='time (months)', ylabel='population',
title='Multiple Predator Prey Relationship')
ax.grid()
ax.legend()
plot2.on_click(plot_graph)
```
|
github_jupyter
|
# Late contributions Received and Made
## Setup
```
%load_ext sql
from django.conf import settings
connection_string = 'postgresql+psycopg2://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}'.format(
**settings.DATABASES['default']
)
%sql $connection_string
```
## Unique Composite Key
The documentation says that the records are unique on the following fields:
* `FILING_ID`
* `AMEND_ID`
* `LINE_ITEM`
* `REC_TYPE`
* `FORM_TYPE`
`REC_TYPE` is always the same value: `S497`, so we can ignore this column.
`FORM_TYPE` is either `F497P1` or `F497P2`, indicating whether the itemized transaction is listed under Part 1 (Contributions Received) or Part 2 (Contributions Made). I'll split these up into separate tables, as sketched below.
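As a rough sketch of that split (in the same query style used throughout this notebook; nothing here is definitive), each part can be pulled out with a simple filter on `FORM_TYPE`:
```
%%sql
-- Sketch: Part 1 (contributions received) vs. Part 2 (contributions made)
SELECT "FORM_TYPE", COUNT(*)
FROM "S497_CD"
GROUP BY 1
ORDER BY 1;
```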
## Are the `S497_CD` records actually unique on `FILING_ID`, `AMEND_ID` and `LINE_ITEM`?
Yes. And this is even true across the Parts 1 and 2 (Contributions Received and Contributions Made).
```
%%sql
SELECT "FILING_ID", "AMEND_ID", "LINE_ITEM", COUNT(*)
FROM "S497_CD"
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC;
```
## `TRAN_ID`
The `S497_CD` table includes a `TRAN_ID` field, which the [documentation](http://calaccess.californiacivicdata.org/documentation/calaccess-files/s497-cd/#fields) describes as a "Permanent value unique to this item".
### Is `TRAN_ID` ever `NULL` or blank?
No.
```
%%sql
SELECT COUNT(*)
FROM "S497_CD"
WHERE "TRAN_ID" IS NULL OR "TRAN_ID" = '' OR "TRAN_ID" = '0';
```
### Is `TRAN_ID` unique across filings?
Decidedly no.
```
%%sql
SELECT "TRAN_ID", COUNT(DISTINCT "FILING_ID")
FROM "S497_CD"
GROUP BY 1
HAVING COUNT(DISTINCT "FILING_ID") > 1
ORDER BY COUNT(DISTINCT "FILING_ID") DESC
LIMIT 100;
```
But `TRAN_ID` does appear to be unique within each filing amendment, and appears to be reused for each filing.
```
%%sql
SELECT "FILING_ID", "TRAN_ID", COUNT(DISTINCT "AMEND_ID") AS amend_count, COUNT(*) AS row_count
FROM "S497_CD"
GROUP BY 1, 2
ORDER BY COUNT(*) DESC
LIMIT 100;
```
There's one exception:
```
%%sql
SELECT "FILING_ID", "TRAN_ID", "AMEND_ID", COUNT(*)
FROM "S497_CD"
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1;
```
Looks like this `TRAN_ID` is duplicated across the two parts of the filing. So it was a contribution both made and received?
```
%%sql
SELECT *
FROM "S497_CD"
WHERE "FILING_ID" = 2072379
AND "TRAN_ID" = 'EXP9671';
```
Looking at the [PDF for the filing](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2072379&amendid=1), it appears to be a check from the California Psychological Association PAC to the McCarty for Assembly 2016 committee, which was given and returned on 8/25/2016.
Regardless, because the combinations of `FILING_ID`, `AMEND_ID` and `TRAN_ID` are unique within each part of the Schedule 497, we could substitute `TRAN_ID` for `LINE_ITEM` in the composite key when splitting up the contributions received from the contributions made.
The advantage is that the `TRAN_ID` purportedly points to the same contribution from one amendment to the next, whereas the same `LINE_ITEM` might not because the filers don't necessarily list transactions on the same line from one filing amendment to the next.
Here's an example: On the [original Schedule 497 filing](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2083478&amendid=0) for Steven Bradford for Senate 2016, an $8,500.00 contribution from an AFL-CIO sub-committee is listed on line 1. But on the [first](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2083478&amendid=1) and [second](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2083478&amendid=2) amendments to the filing, it is listed on line 4.
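As a quick hedged check of that alternative key (a sketch in the same style as the queries above), one could confirm that `FILING_ID`, `AMEND_ID` and `TRAN_ID` are unique within each `FORM_TYPE`:
```
%%sql
SELECT "FORM_TYPE", "FILING_ID", "AMEND_ID", "TRAN_ID", COUNT(*)
FROM "S497_CD"
GROUP BY 1, 2, 3, 4
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC;
```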
|
github_jupyter
|
## Imports
```
from __future__ import print_function, division
import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
import patsy
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.stats as stats
%matplotlib inline
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.pipeline import Pipeline
from sklearn import cross_validation
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNetCV
from sklearn.linear_model import LassoCV
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_squared_error as MSE
```
## Reading and preparing the df
```
horsey = pd.read_csv('finalmerged_clean').drop('Unnamed: 0', axis=1)
```
#### Smaller data set (maiden females)
```
MaidenFems = horsey.iloc[42:49]
MaidenFems
```
#### Larger data set (without maiden females)
```
horse_fast = horsey.drop(horsey.index[42:49]).reset_index(drop=True)
horse_fast
horse_fast = horse_fast.drop('Final_Time',1).drop('Horse Name',1)
horse_fast
```
## Splitting into Master Test-Train
```
ttest = horse_fast.iloc[[1,5,10,15,20,25,30,35,40,45,50]].reset_index(drop=True)
ttrain = horse_fast.drop(axis = 0, index = [1,5,10,15,20,25,30,35,40,45,50]).sample(frac=1).reset_index(drop=True)
ttrain
y_ttrain = ttrain['Final_Time_Hund']
y_ttest = ttest['Final_Time_Hund'] #extract dependent variable
X_ttrain = ttrain.drop('Final_Time_Hund',1)
X_ttest = ttest.drop('Final_Time_Hund',1) # Get rid of ind. variables
```
## Testing Assumptions
Didn't complete for sake of time
#### Assumption 1
```
XAssum = X_ttrain
yAssum = y_ttrain
XAssum_train, XAssum_test, yAssum_train, yAssum_test = train_test_split(XAssum, yAssum, test_size=0.2)
def diagnostic_plot(x, y):
    plt.figure(figsize=(20,5))
    rgr = LinearRegression()
    rgr.fit(x, y)
    pred = rgr.predict(x)
    # Predicted vs. actual plot (x has several features, so plot against the predictions)
    plt.subplot(1, 3, 1)
    plt.scatter(pred, y)
    plt.plot(pred, pred, color='blue', linewidth=1)
    plt.title("Regression fit")
    plt.xlabel("prediction")
    plt.ylabel("y")
    # Residual plot (true minus predicted)
    plt.subplot(1, 3, 2)
    res = y - pred
    plt.scatter(pred, res)
    plt.title("Residual plot")
    plt.xlabel("prediction")
    plt.ylabel("residuals")
    # A Q-Q plot is a percentile-percentile plot. When the residuals follow the
    # theoretical (normal) distribution, the points fall on a diagonal 45-degree line;
    # when the distributions diverge (e.g. different kurtosis), the line bends away.
    plt.subplot(1, 3, 3)
    # Generates a probability plot of the residuals against the quantiles of a
    # specified theoretical distribution
    stats.probplot(res, dist="norm", plot=plt)
    plt.title("Normal Q-Q plot")
diagnostic_plot(XAssum_train, yAssum_train)
modelA = ElasticNet(1, l1_ratio=.5)
fit = modelA.fit(XAssum_train, yAssum_train)
rsq = fit.score(XAssum_train, yAssum_train)
adj_rsq = 1 - (1-rsq)*(len(yAssum_train)-1)/(len(yAssum_train)-XAssum_train.shape[1]-1)
print(rsq)
print(adj_rsq)
```
#### Assumption 2
```
# develop OLS with sklearn
X = ttrain.drop('Final_Time_Hund', 1)  # features
y = ttrain['Final_Time_Hund']          # target
lr = LinearRegression()
fit = lr.fit(X, y)
data = ttrain.copy()
data['predict'] = fit.predict(X)
data['resid'] = data['Final_Time_Hund'] - data['predict']
with sns.axes_style('white'):
    plot = data.plot(kind='scatter',
                     x='predict', y='resid', alpha=0.2, figsize=(10,6))
```
## Model 0 - Linear Regression
Working with the training data that doesn't include the maiden-filly race.
```
horsey = ttrain
Xlin = X_ttrain
ylin = y_ttrain
```
#### Regplots
```
sns.regplot('Gender','Final_Time_Hund', data=horsey);
#Makes sense! Male horses tend to be a little faster.
sns.regplot('Firsts','Final_Time_Hund', data=horsey);
#Makes sense! Horses that have won more races tend to be faster.
sns.regplot('Seconds','Final_Time_Hund', data=horsey);
#Similar to the result for "firsts", but slightly less apparent.
sns.regplot('Thirds','Final_Time_Hund', data=horsey);
#Similar to the results above.
sns.regplot('PercentWin','Final_Time_Hund', data=horsey);
#Not a great correlation...
sns.regplot('Starts','Final_Time_Hund', data=horsey);
#This seems pretty uncorrelated...
sns.regplot('Date','Final_Time_Hund', data=horsey);
#Horses with more practice have faster times. But pretty uncorrelated...
sns.regplot('ThreeF','Final_Time_Hund', data=horsey);
#Really no correlation!
sns.regplot('FourF','Final_Time_Hund', data=horsey);
#Huh, not great either.
sns.regplot('FiveF','Final_Time_Hund', data=horsey);
#Slower practice time means slower final time. But yeah... pretty uncorrelated...
```
#### Correlations
```
horsey.corr()
%matplotlib inline
import matplotlib
matplotlib.rcParams["figure.figsize"] = (12, 10)
sns.heatmap(horsey.corr(), vmin=-1,vmax=1,annot=True, cmap='seismic');
```
Pretty terrible... but it seems like FiveF, Date, Gender and PercentWin are the best... (in that order).
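As a quick numeric check (a minimal sketch reusing the same `horsey` frame; this cell is not in the original notebook), the features can be ranked by the absolute value of their correlation with the target:
```
# Rank predictors by |correlation| with the final time
corr_with_target = (horsey.corr()['Final_Time_Hund']
                    .drop('Final_Time_Hund')   # ignore the target's correlation with itself
                    .abs()
                    .sort_values(ascending=False))
print(corr_with_target)
```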
```
sns.pairplot(horsey, size = 1.2, aspect=1.5);
plt.hist(horsey.Final_Time_Hund);
```
#### Linear Regression (All inputs)
```
#Gotta add the constant... without it my r^2 was 1.0!
Xlin = sm.add_constant(Xlin)
#Creating the model
lin_model = sm.OLS(ylin,Xlin)
# Fitting the model to the training set
fit_lin = lin_model.fit()
# Print summary statistics of the model's performance
fit_lin.summary()
```
- r2 could be worse...
- adj r2 also could be worse...
- Inputs that seem significant based on p-value: Gender... that's about it! The next lowest are Firsts, Seconds and Date (though they're still pretty weak). But I guess if 70% of the data lies within the confidence level... that's better than none...
** TESTING! **
```
Xlin = X_ttrain
ylin = y_ttrain
lr_train = LinearRegression()
lr_fit = lr_train.fit(Xlin, ylin)
r2_training = lr_train.score(Xlin, ylin)
r2adj_training = 1 - (1-r2_training)*(len(ylin)-1)/(len(ylin)-Xlin.shape[1]-1)
preds = lr_fit.predict(X_ttest)
rmse = np.sqrt(MSE(y_ttest, preds))
print('R2:', r2_training)
print('R2 Adjusted:', r2adj_training)
print('Output Predictions', preds)
print('RMSE:', rmse)
```
#### Linear Regression (Updated Inputs)
Below is the best combination of features to drop: Thirds, ThreeF & PercentWin
```
Xlin2 = Xlin.drop(labels ='Thirds', axis = 1).drop(labels ='ThreeF', axis = 1).drop(labels ='PercentWin', axis = 1)
ylin2 = y_ttrain
#Gotta add the constant... without it my r^2 was 1.0!
Xlin2 = sm.add_constant(Xlin2)
#Creating the model
lin_model = sm.OLS(ylin,Xlin2)
# Fitting the model to the training set
fit_lin = lin_model.fit()
# Print summary statistics of the model's performance
fit_lin.summary()
```
Slightly better...
## Model A - Elastic Net (no frills)
```
## Establishing x and y
XA = X_ttrain
yA = y_ttrain
#Checking the predictability of the model with this alpha = 1
modelA = ElasticNet(1, l1_ratio=.5)
fit = modelA.fit(XA, yA)
rsq = fit.score(XA, yA)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
print(rsq)
print(adj_rsq)
```
** 0.3073 ** not great... but not terrible. 30% of the variance is explained by the model.
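Since the adjusted R² formula is retyped in nearly every cell below, a small helper (hypothetical, not part of the original notebook) would keep it in one place:
```
def adj_r2(r2, n_samples, n_features):
    """Adjusted R^2 from a plain R^2, the number of rows and the number of predictors."""
    return 1 - (1 - r2) * (n_samples - 1) / (n_samples - n_features - 1)

# e.g. the Model A numbers above could be reproduced with:
# adj_r2(rsq, len(yA), XA.shape[1])
```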
```
#Let's see if I play around with the ratios of L1 and L2
modelA = ElasticNet(1, l1_ratio=.2)
fit = modelA.fit(XA, yA)
rsq = fit.score(XA, yA)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
print(rsq)
print(adj_rsq)
```
** Looks slightly worse. I guess there wasn't much need to compress complexity, or fix colinearity. **
```
#Let's check it in the other direction, with L1 getting more weight.
modelA = ElasticNet(1, l1_ratio=.98)
fit = modelA.fit(XA, yA)
rsq = fit.score(XA, yA)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
print(rsq)
print(adj_rsq)
```
** Seems like l1 of 0.98 really takes the cake! Let's check out alpha... Might be worth it to switch to a
Lasso model... something to keep in mind**
```
#Let's see if we can find a better alpha...
kf = KFold(n_splits=5, shuffle = True, random_state = 40 )
alphas = [1e-9,1e-8,1e-7,1e-6,1e-5,1e-4,1e-3,1e-2,1e-1,1,10,100,1000,10000, 100000, 1000000]
#alphas = [0,.001,.01,.1,.2,.5,.9,1,5,10,50,100,1000,10000]
errors = []
for i in alphas:
err_list = []
for train_index, test_index in kf.split(XA):
#print("TRAIN:", train_index, "TEST:", test_index) #This gives the index of the rows you're training and testing.
XA_train, XA_test = XA.loc[train_index], XA.loc[test_index]
yA_train, yA_test = yA[train_index], yA[test_index]
ef = ElasticNet(i, l1_ratio = 0.5)
ef.fit(XA_train,yA_train)
#print(ef.coef_) #This prints the coefficients of each of the input variables.
preds = ef.predict(XA_test) #Predictions for the y value.
error = np.sqrt(MSE(preds,yA_test))
err_list.append(error)
error = np.mean(err_list)
errors.append(error)
print("The RMSE for alpha = {0} is {1}".format(i,error))
```
** Looks like the best alpha is around 1000! Let's see if we can get even more granular. **
```
kf = KFold(n_splits=5, shuffle = True, random_state = 40)
alphas = [500, 600, 800, 900, 1000, 1500, 2000, 3000]
#alphas = [0,.001,.01,.1,.2,.5,.9,1,5,10,50,100,1000,10000]
errors = []
for i in alphas:
err_list = []
for train_index, test_index in kf.split(XA):
#print("TRAIN:", train_index, "TEST:", test_index) #This gives the index of the rows you're training and testing.
XA_train, XA_test = XA.loc[train_index], XA.loc[test_index]
yA_train, yA_test = yA[train_index], yA[test_index]
ef = ElasticNet(i)
ef.fit(XA_train,yA_train)
#print(ef.coef_) #This prints the coefficients of each of the input variables.
preds = ef.predict(XA_test) #Predictions for the y value.
error = np.sqrt(MSE(preds,yA_test))
err_list.append(error)
error = np.mean(err_list)
errors.append(error)
print("The RMSE for alpha = {0} is {1}".format(i,error))
```
** I'm going to settle on an alpha of 800 **
```
#Checking the predictability of the model again with the new alpha of 800.
modelA = ElasticNet(alpha = 800)
fit = modelA.fit(XA, yA)
fit.score(XA, yA)
```
Hm. Not really sure what that did, but definitely didn't work...
** TESTING **
Doing ElasticNetCV (without any modifications)
```
## Letting it do its thing on its own.
encvA = ElasticNetCV()
fitA = encvA.fit(XA, yA)
r2_training = encvA.score(XA, yA)
y= np.trim_zeros(encvA.fit(XA,yA).coef_)
#r2adj_training = 1 - (1-r2_training)*(XA.shape[1]-1)/(XA.shape[1]-len(y)-1)
adj_rsq = 1 - (1-r2_training)*(len(XA)-1)/(len(XA)-XA.shape[1]-len(y)-1)
preds = fitA.predict(X_ttest)
rmse = np.sqrt(MSE(preds, y_ttest))
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Output Predictions', preds)
print('RMSE:', rmse)
print('Alpha:',encvA.alpha_)
print('L1:',encvA.l1_ratio_)
print('Coefficients:',fitA.coef_)
elastic_coef = encvA.fit(XA, yA).coef_
_ = plt.bar(range(len(XA.columns)), elastic_coef)
_ = plt.xticks(range(len(XA.columns)), XA.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
```
Doing ElasticNet CV - changing the l1 ratio
```
encvA2 = ElasticNetCV(l1_ratio = .99)
fitA2 = encvA2.fit(XA, yA)
r2_training = encvA2.score(XA, yA)
y= np.trim_zeros(encvA2.fit(XA,yA).coef_)
adj_rsq = 1 - (1-r2_training)*(len(XA)-1)/(len(XA)-XA.shape[1]-len(y)-1)
preds = fitA2.predict(X_ttest)
rmse = np.sqrt(MSE(y_ttest, preds))
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Output Predictions', preds)
print('RMSE:', rmse)
print('Alpha:',encvA2.alpha_)
print('L1:',encvA2.l1_ratio_)
print('Coefficients:',fitA.coef_)
elastic_coef = encvA2.fit(XA, yA).coef_
_ = plt.bar(range(len(XA.columns)), elastic_coef)
_ = plt.xticks(range(len(XA.columns)), XA.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
```
### Extras
```
## L1 is 0.98
encvA2 = ElasticNetCV(l1_ratio = 0.98)
fitA2 = encvA2.fit(XA_train, yA_train)
rsq = fitA2.score(XA_test, yA_test)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
preds = fitA2.predict(XA_test)
mserror = np.sqrt(MSE(preds,yA_test))
print(rsq)
print(adj_rsq)
print(preds)
print(mserror)
print(encvA2.alpha_)
print(encvA2.l1_ratio_)
```
Still weird...
```
## Trying some alphas...
encvA3 = ElasticNetCV(alphas = [80,800,1000])
fitA3 = encvA3.fit(XA_train, yA_train)
rsq = fitA3.score(XA_test, yA_test)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
preds = fitA3.predict(XA_test)
mserror = np.sqrt(MSE(preds,yA_test))
print(rsq)
print(adj_rsq)
print(preds)
print(mserror)
print(encvA3.alpha_)
print(encvA3.l1_ratio_)
```
Still confused...
## Model B - Elastic Net (polynomial transformation)
```
## Establishing x and y
XB = X_ttrain
yB = y_ttrain
ModelB = make_pipeline(PolynomialFeatures(2), LinearRegression())
fit = ModelB.fit(XB, yB)
rsq = fit.score(XB, yB)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yB)-XB.shape[1]-1)
print(rsq)
print(adj_rsq)
ModelB = make_pipeline(PolynomialFeatures(3), ElasticNetCV(l1_ratio = .5))
fit = ModelB.fit(XB, yB)
rsq = fit.score(XB, yB)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yB)-XB.shape[1]-1)
print(rsq)
print(adj_rsq)
```
... Hm ... Not great. But we'll test it anyway.
** TESTING **
```
encvB = make_pipeline(PolynomialFeatures(2), LinearRegression())
fitB = encvB.fit(XB, yB)
r2_training = encvB.score(X_ttest, y_ttest)
#y= np.trim_zeros(encvB.fit(XB,yB).coef_)
#r2adj_training = 1 - (1-r2_training)*(XB.shape[1]-1)/(XB.shape[1]-len(y)-1)
preds = fitB.predict(X_ttest)
rmse = np.sqrt(MSE(y_ttest, preds))
print('R2:', r2_training)
print('Output Predictions', preds)
print('RMSE:', rmse)
# encvB is a PolynomialFeatures + LinearRegression pipeline, so there is no
# ElasticNetCV step (and hence no alpha_ or l1_ratio_) to report here.
#Testing the predictability of the model with this alpha = 0.5
XB_train, XB_test, yB_train, yB_test = train_test_split(XB, yB, test_size=0.2)
modelB = make_pipeline(PolynomialFeatures(2), ElasticNetCV(l1_ratio = .5))
modelB.fit(XB_train, yB_train)
rsq = modelB.score(XB_train,yB_train)
adj_rsq = 1 - (1-rsq)*(len(yB_train)-1)/(len(yB_train)-XB_train.shape[1]-1)
preds = modelB.predict(XB_test)
mserror = np.sqrt(MSE(preds,yB_test))
print(rsq)
print(adj_rsq)
print(preds)
print(mserror)
print(modelB.named_steps.elasticnetcv.alpha_)
print(modelB.named_steps.elasticnetcv.l1_ratio_)
```
## Model C - Elastic Net CV with transformations
On second review, none of the inputs would benefit from transformations
```
C_train = ttrain
C_train['new_firsts_log']=np.log(C_train.Firsts)
C_train
#C_train.new_firsts_log.str.replace('-inf', '0')
```
## Predicting Today's Race!
```
todays_race = pd.read_csv('big_race_day').drop('Unnamed: 0', axis = 1).drop('Horse Name', axis =1)
## today_race acting as testing x
todays_race
```
### Maiden Fems Prediction
```
ym_train = MaidenFems['Final_Time_Hund']
xm_train = MaidenFems.drop('Final_Time_Hund',1).drop('Horse Name',1).drop('Final_Time',1)
enMaid = ElasticNetCV(l1_ratio=0.90)  # the first positional argument is l1_ratio; make it explicit
fitMaid = enMaid.fit(xm_train, ym_train)
preds = fitMaid.predict(todays_race)
r2_training = enMaid.score(xm_train, ym_train)
y= np.trim_zeros(enMaid.fit(xm_train,ym_train).coef_)
adj_rsq = 1 - (1-r2_training)*(len(xm_train)-1)/(len(xm_train)-xm_train.shape[1]-len(y)-1)
print('Output Predictions', preds)
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Alpha:',enMaid.alpha_)
print('L1:',enMaid.l1_ratio_)
print('Coefficients:',fitMaid.coef_)
elastic_coef = enMaid.fit(xm_train, ym_train).coef_
_ = plt.bar(range(len(xm_train.columns)), elastic_coef)
_ = plt.xticks(range(len(xm_train.columns)), xm_train.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
finalguesses_Maiden = [{'Horse Name': 'Lady Lemon Drop' ,'Maiden Horse Guess': 10116.53721999},
{'Horse Name': 'Curlins Prize' ,'Maiden Horse Guess': 10097.09521978},
{'Horse Name': 'Luminoso' ,'Maiden Horse Guess':10063.11500294},
{'Horse Name': 'Party Dancer' ,'Maiden Horse Guess': 10069.32339855},
{'Horse Name': 'Bring on the Band' ,'Maiden Horse Guess': 10054.64900894},
{'Horse Name': 'Rockin Ready' ,'Maiden Horse Guess': 10063.67940254},
{'Horse Name': 'Rattle' ,'Maiden Horse Guess': 10073.93665433},
{'Horse Name': 'Curlins Journey' ,'Maiden Horse Guess': 10072.45966259},
{'Horse Name': 'Heaven Escape' ,'Maiden Horse Guess':10092.43120946}]
```
### EN-CV prediction
```
encvL = ElasticNetCV(l1_ratio = 0.99)
fiten = encvL.fit(X_ttrain, y_ttrain)
preds = fiten.predict(todays_race)
r2_training = encvL.score(X_ttrain, y_ttrain)
y = np.trim_zeros(encvL.fit(X_ttrain,y_ttrain).coef_)
adj_rsq = 1 - (1-r2_training)*(len(X_ttrain)-1)/(len(X_ttrain)-X_ttrain.shape[1]-len(y)-1)
print('Output Predictions', preds)
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Alpha:',encvL.alpha_)
print('L1:',encvL.l1_ratio_)
print('Coefficients:',fiten.coef_)
elastic_coef = encvL.fit(X_ttrain, y_ttrain).coef_
_ = plt.bar(range(len(X_ttrain.columns)), elastic_coef)
_ = plt.xticks(range(len(X_ttrain.columns)), X_ttrain.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
finalguesses_EN = [{'Horse Name': 'Lady Lemon Drop' ,'Guess': 9609.70585871},
{'Horse Name': 'Curlins Prize' ,'Guess': 9645.82659915},
{'Horse Name': 'Luminoso' ,'Guess':9558.93257549},
{'Horse Name': 'Party Dancer' ,'Guess': 9564.01963654},
{'Horse Name': 'Bring on the Band' ,'Guess': 9577.9212198},
{'Horse Name': 'Rockin Ready' ,'Guess': 9556.46879067},
{'Horse Name': 'Rattle' ,'Guess': 9549.09508205},
{'Horse Name': 'Curlins Journey' ,'Guess': 9546.58621572},
{'Horse Name': 'Heaven Escape' ,'Guess':9586.917829}]
```
### Linear Regression prediction
```
Xlin = X_ttrain
ylin = y_ttrain
lr = LinearRegression()
lrfit = lr.fit(Xlin, ylin)
preds = lrfit.predict(todays_race)
r2_training = lr.score(Xlin, ylin)
r2adj_training = 1 - (1-r2_training)*(len(ylin)-1)/(len(ylin)-Xlin.shape[1]-1)
print('Output Predictions', preds)
print('R2:', r2_training)
print('R2 Adjusted:', r2adj_training)
elastic_coef = lrfit.coef_  # already fitted above; no need to refit
_ = plt.bar(range(len(Xlin.columns)), elastic_coef)
_ = plt.xticks(range(len(Xlin.columns)), Xlin.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
finalguesses_Lin = [{'Horse Name': 'Lady Lemon Drop' ,'Guess': 9720.65585682},
{'Horse Name': 'Curlins Prize' ,'Guess': 9746.17852003},
{'Horse Name': 'Luminoso' ,'Guess':9608.10444379},
{'Horse Name': 'Party Dancer' ,'Guess': 9633.58532183},
{'Horse Name': 'Bring on the Band' ,'Guess': 9621.04698335},
{'Horse Name': 'Rockin Ready' ,'Guess': 9561.82026773},
{'Horse Name': 'Rattle' ,'Guess': 9644.13062968},
{'Horse Name': 'Curlins Journey' ,'Guess': 9666.24092249},
{'Horse Name': 'Heaven Escape' ,'Guess':9700.56665335}]
```
### Setting the data frames
```
GuessLin = pd.DataFrame(finalguesses_Lin)
GuessMaid = pd.DataFrame(finalguesses_Maiden)
GuessEN = pd.DataFrame(finalguesses_EN)
GuessLin.sort_values('Guess')
GuessMaid.sort_values('Maiden Horse Guess')
GuessEN.sort_values('Guess')
```
```
from sympy import pi, cos, sin, symbols
from sympy.utilities.lambdify import implemented_function
import pytest
from sympde.calculus import grad, dot
from sympde.calculus import laplace
from sympde.topology import ScalarFunctionSpace
from sympde.topology import element_of
from sympde.topology import NormalVector
from sympde.topology import Square
from sympde.topology import Union
from sympde.expr import BilinearForm, LinearForm, integral
from sympde.expr import Norm
from sympde.expr import find, EssentialBC
from sympde.expr.expr import linearize
from psydac.fem.basic import FemField
from psydac.api.discretization import discretize
x,y,z = symbols('x1, x2, x3')
```
# Non-Linear Poisson in 2D
In this section, we consider the non-linear Poisson problem:
$$
-\nabla \cdot \left( (1+u^2) \nabla u \right) = f, \Omega
\\
u = 0, \partial \Omega
$$
where $\Omega$ denotes the unit square.
For testing, we shall take a function $u$ that fulfills the boundary condition, then compute $f$ as
$$
f = -\nabla \cdot \left( (1+u^2) \nabla u \right)
$$
The weak formulation is
$$
\int_{\Omega} (1+u^2) \nabla u \cdot \nabla v ~ d\Omega = \int_{\Omega} f v ~d\Omega, \quad \forall v \in \mathcal{V}
$$
For the sake of generality, we shall consider the linear form
$$
G(v;u,w) := \int_{\Omega} (1+w^2) \nabla u \cdot \nabla v ~ d\Omega, \quad \forall u,v,w \in \mathcal{V}
$$
Our problem is then
$$
\mbox{Find } u \in \mathcal{V}, \mbox{such that}\\
G(v;u,u) = l(v), \quad \forall v \in \mathcal{V}
$$
where
$$
l(v) := \int_{\Omega} f v ~d\Omega, \quad \forall v \in \mathcal{V}
$$
#### Topological domain
```
domain = Square()
B_dirichlet_0 = domain.boundary
```
#### Function Space
```
V = ScalarFunctionSpace('V', domain)
```
#### Defining the Linear form $G$
```
u = element_of(V, name='u')
v = element_of(V, name='v')
w = element_of(V, name='w')
# Linear form g: V --> R
g = LinearForm(v, integral(domain, (1+w**2)*dot(grad(u), grad(v))))
```
#### Defining the Linear form L
```
solution = sin(pi*x)*sin(pi*y)
f = 2*pi**2*(sin(pi*x)**2*sin(pi*y)**2 + 1)*sin(pi*x)*sin(pi*y) - 2*pi**2*sin(pi*x)**3*sin(pi*y)*cos(pi*y)**2 - 2*pi**2*sin(pi*x)*sin(pi*y)**3*cos(pi*x)**2
# Linear form l: V --> R
l = LinearForm(v, integral(domain, f * v))
```
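As a sanity check (a minimal sketch using plain SymPy only, independent of SymPDE), the right-hand side can be derived from the manufactured solution instead of being typed by hand; the result should agree with the expression for `f` above:
```
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u_exact = sp.sin(sp.pi*x1) * sp.sin(sp.pi*x2)

# f = -div((1 + u^2) grad u), computed term by term
flux_x = (1 + u_exact**2) * sp.diff(u_exact, x1)
flux_y = (1 + u_exact**2) * sp.diff(u_exact, x2)
f_check = -(sp.diff(flux_x, x1) + sp.diff(flux_y, x2))

print(sp.expand(f_check))
```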
### Picard Method
$$
\mbox{Find } u_{n+1} \in \mathcal{V}_h, \mbox{such that}\\
G(v;u_{n+1},u_n) = l(v), \quad \forall v \in \mathcal{V}_h
$$
### Newton Method
Let's define
$$
F(v;u) := G(v;u,u) -l(v), \quad \forall v \in \mathcal{V}
$$
Newton method writes
$$
\mbox{Find } u_{n+1} \in \mathcal{V}_h, \mbox{such that}\\
F^{\prime}(\delta u,v; u_n) = - F(v;u_n), \quad \forall v \in \mathcal{V} \\
u_{n+1} := u_{n} + \delta u, \quad \delta u \in \mathcal{V}
$$
#### Computing $F^{\prime}$ the derivative of $F$
**SymPDE** allows you to linearize a linear form and get a bilinear form, using the function **linearize**
```
F = LinearForm(v, g(v,w=u)-l(v))
du = element_of(V, name='du')
Fprime = linearize(F, u, trials=du)
```
## Picard Method
#### Abstract Model
```
un = element_of(V, name='un')
# Bilinear form a: V x V --> R
a = BilinearForm((u, v), g(v, u=u,w=un))
# Dirichlet boundary conditions
bc = [EssentialBC(u, 0, B_dirichlet_0)]
# Variational problem
equation = find(u, forall=v, lhs=a(u, v), rhs=l(v), bc=bc)
# Error norms
error = u - solution
l2norm = Norm(error, domain, kind='l2')
```
#### Discretization
```
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=[16,16], comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=[2,2])
# Discretize equation using Dirichlet bc
equation_h = discretize(equation, domain_h, [Vh, Vh])
# Discretize error norms
l2norm_h = discretize(l2norm, domain_h, Vh)
```
#### Picard solver
```
def picard(niter=10):
Un = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
Un = equation_h.solve(un=Un)
# Compute error norms
l2_error = l2norm_h.assemble(u=Un)
print('l2_error = ', l2_error)
return Un
Un = picard(niter=5)
from matplotlib import pyplot as plt
from utilities.plot import plot_field_2d
nbasis = [w.nbasis for w in Vh.spaces]
p1,p2 = Vh.degree
x = Un.coeffs._data[p1:-p1,p2:-p2]
u = x.reshape(nbasis)
plot_field_2d(Vh.knots, Vh.degree, u) ; plt.colorbar()
```
## Newton Method
#### Abstract Model
```
# Dirichlet boundary conditions
bc = [EssentialBC(du, 0, B_dirichlet_0)]
# Variational problem
equation = find(du, forall=v, lhs=Fprime(du, v,u=un), rhs=-F(v,u=un), bc=bc)
```
#### Discretization
```
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=[16,16], comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=[2,2])
# Discretize equation using Dirichlet bc
equation_h = discretize(equation, domain_h, [Vh, Vh])
# Discretize error norms
l2norm_h = discretize(l2norm, domain_h, Vh)
```
#### Newton Solver
```
def newton(niter=10):
Un = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
delta_x = equation_h.solve(un=Un)
Un = FemField( Vh, delta_x.coeffs + Un.coeffs )
# Compute error norms
l2_error = l2norm_h.assemble(u=Un)
print('l2_error = ', l2_error)
return Un
un = newton(niter=5)
nbasis = [w.nbasis for w in Vh.spaces]
p1,p2 = Vh.degree
x = un.coeffs._data[p1:-p1,p2:-p2]
u = x.reshape(nbasis)
plot_field_2d(Vh.knots, Vh.degree, u) ; plt.colorbar()
```
```
import re
```
The re module uses a backtracking regular expression engine.
Regular expressions match text patterns.
Use case examples:
- Check if an email or phone number was written correctly.
- Split text by some mark (comma, dot, newline) which may be useful to parse data.
- Get content from HTML tags.
- Improve your linux command skills.
However ...
>Some people, when confronted with a problem, think "I know, I'll use regular expressions". Now they have two problems - Jamie Zawinski, 1997
## **Python String**
\begin{array}{ccc}
\hline Type & Prefixed & Description \\\hline
\text{String} & - & \text{They are string literals. They're Unicode. The backslash is
necessary to escape meaningful characters.} \\
\text{Raw String} & \text{r or R} & \text{They're equal to literal strings with the exception of the
backslashes, which are treated as normal characters.} \\
\text{Byte String} & \text{b or B} & \text{Strings represented as bytes. They can only contain ASCII
characters; if the byte is greater than 128, it must be escaped.} \\
\end{array}
```
#Normal String
print("feijão com\t limão")
#Raw String
print(r"feijão com\t limão")
#Byte String
print(b"feij\xc3\xa3o com\t lim\xc3\xa3o")
str(b"feij\xc3\xa3o com\t lim\xc3\xa3o", "utf-8")
```
## **General**
Our build blocks are composed of:
- Literals
- Metacharacter
- Backslash: \\
- Caret: \^
- Dollar Sign: \$
- Dot: \.
- Pipe Symbol: \|
- Question Mark: \?
- Asterisk: \*
- Plus sign: \+
- Opening parenthesis: \(
- Closing parenthesis: \)
- Opening square bracket: \[
- The opening curly brace: \{
### **Literals**
```
"""
version 1: with compile
"""
def areYouHungry_v1(pattern, text):
match = pattern.search(text)
if match: print("HERE !!!\n")
else: print("Sorry pal, you'll starve to death.\n")
helloWorldRegex = r"rodrigo"
pattern = re.compile(helloWorldRegex)
text1 = r"Where can I find food here? - rodrigo"
text2 = r"Where can I find food here? - Rodrigo"
areYouHungry_v1(pattern,text1)
areYouHungry_v1(pattern,text2)
"""
version 2: without compile
"""
def areYouHungry_v2(regex, text):
match = re.search(regex, text)
if match: print("HERE !!!\n")
else: print("Sorry pal, you'll starve to death.\n")
helloWorldRegex = r"rodrigo"
text1 = r"Where can I find food here? - rodrigo"
text2 = r"Where can I find food here? - Rodrigo"
areYouHungry_v2(helloWorldRegex, text1)
areYouHungry_v2(helloWorldRegex, text2)
```
### **Character classes**
```
"""
version 3: classes
"""
def areYouHungry_v3(pattern, text):
match = pattern.search(text)
if match: print("Beer is also food !!\n")
else: print("Sorry pal, you'll starve to death.\n")
helloWorldRegex = r"[rR]odrigo"
pattern = re.compile(helloWorldRegex)
text1 = r"Where can I find food here? - rodrigo"
text2 = r"Where can I find food here? - Rodrigo"
areYouHungry_v3(pattern,text1)
areYouHungry_v3(pattern,text2)
```
Usual Classes:
- [0-9]: Matches anything between 0 and 9.
- [a-z]: Matches anything between a and z.
- [A-Z]: Matches anything between A and Z.
Predefined Classes (demonstrated in the sketch after this list):
- **\.** : Matches everything except newline.
- Lower Case classes:
- \d : Same as [0-9].
- \s : Same as [ \t\n\r\f\v]; the first character of the class is the whitespace character.
- \w : Same as [a-zA-Z0-9_]; the last character of the class is the underscore character.
- Upper Case classes (the negation):
- \D : Matches any non-decimal digit, same as [^0-9].
- \S : Matches any non-whitespace character, same as [^ \t\n\r\f\v].
- \W : Matches any non-alphanumeric character, same as [^a-zA-Z0-9_].
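A quick demonstration of the predefined classes (a minimal sketch; the sample string is arbitrary):
```
sample = "Order #42 shipped_on 2021-07-15\t(express)"
print(re.findall(r"\d", sample))   # every digit
print(re.findall(r"\s", sample))   # every whitespace character (space, tab, ...)
print(re.findall(r"\w+", sample))  # runs of word characters [a-zA-Z0-9_]
print(re.findall(r"\D+", sample))  # runs of non-digits
```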
Both versions do the same thing ...
The re module keeps a cache of recently compiled regexes, so you do not need to compile the regex every time you call the function (a technique called memoization).
The first version just gives you finer control ...
```pattern``` is a **re.Pattern** object which has a lot of methods. Let's find out which methods are there, using a regular expression !!
```
helloWorldRegex = r"[rR]odrigo"
pattern = re.compile(helloWorldRegex)
patternText = "\n".join(dir(pattern))
patternText
#Regex for names that do not start with "__"
pattern_list_methods = set(re.findall(r"^(?!__).*$", patternText, re.M))
to_delete = ["fullmatch", "groupindex", "pattern", "scanner"]
pattern_list_methods.difference_update(to_delete)
print(pattern_list_methods)
```
- RegexObject: It is also known as Pattern Object. It represents a compiled regular expression
- MatchObject: It represents the matched pattern
### Regex Behavior
```
def isGotcha(match):
if match: print("Found it")
else: print("None here")
data = "aaabbbccc"
match1 = re.match("\w+", data)
isGotcha(match1)
match2 = re.match("bbb",data)
isGotcha(match2)
match3 = re.search("bbb",data)
isGotcha(match3)
```
\begin{array}{rrr}
\hline \text{match1} & \text{match2} & \text{match3}\\\hline
\text{aaabbbccc} & \text{aaabbbccc} & \text{aaabbbccc}\\
\text{aabbbccc} & \text{returns None} & \text{aabbbccc}\\
\text{abbbccc} & - & \text{abbbccc}\\
\text{bbbccc} & - & \text{bbbccc}\\
\text{bbccc} & - & \text{bbccc}\\
\text{bccc} & - & \text{bccc}\\
\text{ccc} & - & \text{returns Match}\\
\text{cc} & - & - \\
\text{c} & - & - \\
\text{returns None} & - & -
\end{array}
### Greedy Behavior
```
text = "<b>foo</b> and <i>so on</i>"
match = re.match("<.*>",text)
print(match)
print(match.group())
text = "<b>foo</b> and <i>so on</i>"
match = re.match("<.*?>",text)
print(match)
print(match.group())
```
The non-greedy behavior can be requested by adding an extra question mark to the quantifier; for example, ??, *? or +?. A quantifier marked as reluctant will behave like the exact opposite of the greedy ones. They will try to have the smallest match possible.
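A minimal sketch contrasting the greedy and reluctant versions of `+` (the example string is arbitrary):
```
price_text = "item1=100, item2=2000"
print(re.search(r"=\d+", price_text).group())   # greedy: '=100'
print(re.search(r"=\d+?", price_text).group())  # reluctant: '=1', the smallest possible match
```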
## **Problem 1** - Phone Number
### **Search**
```
def isThere_v1(regexObject, text):
if regexObject: return f"Your number is: {regexObject.group()}!"
else: return "Hey! I did not find it."
text = """ 9-96379889
96379889
996379889
9-9637-9889
42246889
4224-6889
99637 9889
9 96379889
"""
#The first character is not a number, but a whitespace.
regex1 = re.search(r"\d?", text)
#Removing the leading whitespace we find the number! The ? operator makes the digit optional
regex2 = re.search(r"\d?", text.strip())
#Then an optional whitespace or - may appear. We also grab two digits with \d\d
regex3 = re.search(r"\d?-?\d\d", text.strip())
#However we want more than a fixed number of digits. This can be achieved with the + operator
regex4 = re.search(r"\d?-?\d+", text.strip())
#Anchoring at the end of the string with $
regex5 = re.search(r"\d?-?\d+$", text.strip())
#Using a character class to match - or whitespace
regex6 = re.search(r"\d?[-\s]?\d+$", text.strip())
regex_lst = [regex1, regex2, regex3, regex4, regex5, regex6]
for index, regex in enumerate(regex_lst):
print(f"Regex Number {index+1}")
print(isThere_v1(regex,text) + "\n")
```
### **Findall**
```
def isThere_v2(regexObject, text):
    if regexObject: return f"Wow, phone numbers:\n{regexObject} !"
else: return "Hey! I did not find it."
text = """ 996349889
96359889
9-96349889
9-9634-9889
42256889
4225-6889
99634 9889
9 96349889
"""
#findall looks for every possible match.
regex7 = re.findall(r"\d?[-\s]?\d+", text)
"""
Why is [... ' 9', '-96349889' ...] split?
Step1: \d? is not consumed.
Step2: [-\s]? the whitespace is consumed.
Step3: \d+ Consumes 9 and stop due to the - character.
Therefore ' 9' is recognized.
"""
regex8 = re.findall(r"\d?[-\s]?\d+[-\s]?\d+", text.strip())
"""
Why is [... ' 9-9634', '-9889' ...] split?
Step1: \d? is consumed.
Step2: [-\s]? is consumed.
Step3: \d+ Consumes until the - character
Step4: [-\s]? is not consumed
Step5: \d+ is ignored because the first decimal was consumed in Step3
Therefore ' 9-9634' is recognized.
"""
#Adds a restriction of 4 digits in the first part.
regex9 = re.findall(r"\d?[-\s]?\d{4}[-\s]?\d+", text.strip())
#Adds a restriction of 4 digits in the second part, forcing digits after the separator.
regex10 = re.findall(r"\d?[-\s]?\d{4}[-\s]?\d{4}", text.strip())
regex_lst = [regex7, regex8, regex9, regex10]
for index, regex in enumerate(regex_lst):
print(f"Regex Number {index+7}")
print(isThere_v2(regex,text) + "\n")
text_dirty = r"""996379889
96379889
9-96379889
9-9637-9889
42246889
4224-6889
99637 9889
9 96379889
777 777 777
90 329921 0
9999999999 9
8588588899436
"""
#Regex 10
regex_dirty1 = re.findall(r"\d?[-\s]?\d{4}[-\s]?\d{4}", text_dirty.strip())
#Adding Negative look behind and negative look ahead
regex_dirty2 = re.findall(r"(?<!\d)\d?[-\s]?\d{4}[-\s]?\d{4}(?!\d)", text_dirty.strip())
#\b is a word boundary; whether it matches depends on the characters on either side.
regex_dirty3 = re.findall(r"\b\d?[-\s]?\d{4}[-\s]?\d{4}\b", text_dirty.strip())
regex_dirty_lst = [regex_dirty1, regex_dirty2, regex_dirty3]
for index, result in enumerate(map(lambda x: isThere_v2(x,text_dirty), regex_dirty_lst)):
print(f"Regex Dirty Number {index+1}")
print(result + "\n")
```
### **Finditer**
This is a lazy method: it returns an iterator over match objects instead of building the whole list up front.
```
real_text_example = """
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis viverra consectetur sodales. Vestibulum consequat,
risus in sollicitudin imperdiet, velit 996379889 elit congue sem, vitae aliquet ligula mi eget justo. Nulla facilisi.
Maecenas a egestas nisi. Morbi purus dolor, ornare ac dui a, eleifend dignissim nunc. Proin pellentesque dolor non lectus pellentesque
tincidunt. Ut et 345.323.343-9 tempus orci. Duis molestie 9 96379889 cursus tortor vitae pretium. 4224-6889 Donec non sapien neque. Pellentesque urna ligula, finibus a lectus sit amet
, ultricies cursus metus. Quisque eget orci et turpis faucibus 4224-6889 pharetra.
"""
match_genarator = re.finditer(r'\b((?:9[ -]?)?\d{4}[ \-]?\d{4})\b', real_text_example.strip())
for match in match_genarator:
print(f"Phone Number: {match.group()}\nText Position: {match.span()}\n")
```
## **Problem 2** - Format Text
### **Groups**
```
email_text = "hey my email is: localPart@domain"
```
Using the parenthesis it is possible to capture a group:
```
match = re.search("(\w+)@(\w+)", email_text)
print(match.group(1))
print(match.group(2))
```
Using the following syntax it is possible to give a name to a group:
```(?P<name>pattern)```
```
match = re.search("(?P<localPart>\w+)@(?P<domain>\w+)", email_text)
print(match.group("localPart"))
print(match.group("domain"))
```
### **Sub**
Suppose a text with the following structure:
```
time - day | usage | id, description \n
```
Definitely, a single separator should have been used... However, life is tough.
```
my_txt = r"""
20:18:14 - 21/01 | 0.65 | 3947kedj, No |dia| em que eu saí de casa
25:32:26 - 11/07 | 0.80 | 5679lqui, Minha mãe me disse: |filho|, vem cá
12:13:00 - 12/06 | 0.65 | 5249dqok, Passou a mão em meu cabelos
23:12:35 - 13/03 | 0.77 | 3434afdf, Olhou em meus |olhos|, começou falar
20:22:00 - 12/02 | 0.98 | 1111absd, We are the champions, my friends
22:12:00 - 07/03 | 0.65 | 4092bvds, And we'll keep on |fighting| till the end
22:52:59 - 30/02 | 0.41 | 9021poij, We are the |champions|, we are the champions
21:47:00 - 28/03 | 0.15 | 6342fdpo, No time for |losers|, 'cause we are the champions
19:19:00 - 31/08 | 0.30 | 2314qwen, of the |world|
00:22:21 - 99/99 | 0.00 | 0000aaaa,
"""
print(my_txt)
#\g<name> to reference a group.
pattern = re.compile(r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2})")
text = pattern.sub("\g<day> - \g<time>",my_txt)
print(text)
#pattern, new_text, text
"""
Replaces an optional whitespace followed by the - character with a comma (,).
"""
"""
text = re.sub(r"\s?-", ',', my_txt)
print(text)
#pattern, new_text, text
"""
Do not forget to escape meaningful characters :)
the dot character is escaped; however, the pipe character is not :(
"""
"""
pattern = r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2}) | (?P<usage>\d\.\d{2}) | (?P<id>\d{4}\w{4})"
new_text = r"\g<time>, \g<day>, \g<usage>, \g<id>"
text = re.sub(pattern, new_text, my_txt)
print(text)
#pattern, new_text, text
pattern = "(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2}) \| (?P<usage>\d\.\d{2}) \| (?P<id>\d{4}\w{4})"
new_text = "\g<time>, \g<day>, \g<usage>, \g<id>"
text = re.sub(pattern, new_text, my_txt)
print(text)
```
### **Subn**
Similar to ```sub```. It returns a tuple with the new string and the number of substitutions made.
```
#pattern, new_text, text
pattern = "\|"
new_text = ""
clean_txt, mistakes_count = re.subn(pattern, new_text, text)
print(f"Clean Text Result:\n{clean_txt}")
print(f"How many mistakes did I make it?\n{mistakes_count}")
```
### **Groupdict**
```
#pattern, new_text, text
"""
Do not forget to escape meaningful characters :)
here both the dot and the pipe characters are escaped.
"""
"""
pattern = r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2}) \| (?P<usage>\d\.\d{2}) \| (?P<id>\d{4}\w{4})"
matchs = re.finditer(pattern, my_txt)
for match in matchs:
print(match.groupdict())
```
## **Performance**
>Programmers waste enormous amounts of time thinking about, or worrying
about, the speed of noncritical parts of their programs, and these attempts at
efficiency actually have a strong negative impact when debugging and maintenance
are considered. We should forget about small efficiencies, say about 97% of the
time: premature optimization is the root of all evil. Yet we should not pass up our
opportunities in that critical 3%. - Donald Knuth
General:
- Don't be greedy.
- Reuse compiled patterns (see the sketch after this list).
- Be specific.
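A rough illustration of the "reuse compiled patterns" advice (a minimal sketch; exact timings depend on the machine, and the gap is modest precisely because of the cache mentioned earlier):
```
import timeit

setup = "import re; text = 'abc-123 ' * 1000"
# look the pattern up in re's cache on every call
t_plain = timeit.timeit("re.findall(r'\\d+', text)", setup=setup, number=2000)
# compile once and reuse the pattern object
t_compiled = timeit.timeit("pat.findall(text)",
                           setup=setup + "; pat = re.compile(r'\\d+')", number=2000)
print(f"re.findall: {t_plain:.3f}s   compiled pattern: {t_compiled:.3f}s")
```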
## References
Book:
- Mastering Python Regular Expressions (Packt), by Félix López and Víctor Romero
Links:
- https://developers.google.com/edu/python/regular-expressions
# A simple DNN model built in Keras.
Let's start off with the Python imports that we need.
```
import os, json, math
import numpy as np
import shutil
import tensorflow as tf
print(tf.__version__)
```
## Locating the CSV files
We will start with the CSV files that we wrote out in the [first notebook](../01_explore/taxifare.ipynb) of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data
```
!ls -l ../data/*.csv
```
## Use tf.data to read the CSV files
We wrote these cells in the [third notebook](../03_tfdata/input_pipeline.ipynb) of this sequence.
```
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
.cache())
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
INPUT_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in INPUT_COLS
}
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
    # two hidden layers of [32, 8], just like in the BQML DNN
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
```
## Train model
To train the model, call model.fit()
```
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../data/taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
```
## Predict with model
This is how you'd predict with this model.
```
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
})
```
Of course, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
## Export model
Let's export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
```
# This doesn't work yet.
shutil.rmtree('./export/savedmodel', ignore_errors=True)
tf.keras.experimental.export_saved_model(model, './export/savedmodel')
# Recreate the exact same model
new_model = tf.keras.experimental.load_from_saved_model('./export/savedmodel')
# try predicting with this model
new_model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
})
```
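The `tf.keras.experimental` export/load pair used above was deprecated and later removed from TensorFlow; as an alternative (a minimal sketch assuming a TF 2.x runtime and the `model` and `rmse` objects defined earlier, with a hypothetical output directory), the standard Keras SavedModel API can be used:
```
# Export with the built-in Keras SavedModel support, then reload and reuse it
shutil.rmtree('./export/savedmodel_v2', ignore_errors=True)
model.save('./export/savedmodel_v2', save_format='tf')  # writes a SavedModel directory

restored = tf.keras.models.load_model('./export/savedmodel_v2',
                                      custom_objects={'rmse': rmse})
restored.summary()
```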
In the next notebook, we will improve this model through feature engineering.
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
import numpy as np
import torch
import pandas as pd
import json
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder as LE
import bisect
import torch
from datetime import datetime
from sklearn.model_selection import train_test_split
!cp -r drive/My\ Drive/T11 ./T11
np.random.seed(22)
torch.manual_seed(22)
with open('T11/batsmen.json', 'r') as f:
batsmen = json.load(f)
with open('T11/bowlers.json', 'r') as f:
bowlers = json.load(f)
batsmen = {k: [x for x in v if x[1][1]>=0] for k,v in batsmen.items()}
batsmen = {k: sorted(v, key=lambda x : x[0]) for k,v in batsmen.items() if v}
bowlers = {k: sorted(v, key=lambda x : x[0]) for k,v in bowlers.items() if v}
def getBatScores(scores):
#runs, balls, boundaries, contribs, out
array = []
for score in scores:
date = score[0]
_, runs, balls, fours, sixes, _, contrib = score[1]
boundaries = fours + sixes * 1.5
array.append((date, np.array([runs, balls, boundaries, contrib])))
return array
def getBowlScores(scores):
#overs, maidens, runs, wickets, contribs
array = []
for score in scores:
date = score[0]
overs, maidens, runs, wickets, _, contrib = score[1]
overs = int(overs) + (overs-int(overs))*10/6
array.append((date, np.array([overs, maidens, runs, wickets, contrib])))
return array
batsmen_scores = {k:getBatScores(v) for k,v in batsmen.items()}
bowlers_scores = {k:getBowlScores(v) for k,v in bowlers.items()}
_batsmen_scores = {k:{_v[0]: _v[1] for _v in v} for k,v in batsmen_scores.items()}
_bowlers_scores = {k:{_v[0]: _v[1] for _v in v} for k,v in bowlers_scores.items()}
att = pd.read_csv('T11/attributes.csv')
att['BatHand']=0+(att['Bats'].str.find('eft')>0)
att['BowlHand']=0+(att['Bowls'].str.find('eft')>0)
att['BowlType']=0+((att['Bowls'].str.find('ast')>0) | (att['Bowls'].str.find('edium')>0))
def getBatStats(scores):
dates, scorelist = [score[0] for score in scores], [score[1] for score in scores]
scorelist = np.array(scorelist)
cumscores = np.cumsum(scorelist, axis=0)
innings = np.arange(1, cumscores.shape[0]+1)
average = cumscores[:, 0]/innings
sr = cumscores[:, 0]/(cumscores[:, 1]+1)
contrib = cumscores[:, 3]/innings
stats = np.array([innings, average, sr, contrib]).T
return [datetime.strptime(date, "%Y-%m-%d") for date in dates], stats
def getBowlStats(scores):
dates, scorelist = [score[0] for score in scores], [score[1] for score in scores]
scorelist = np.array(scorelist)
cumscores = np.cumsum(scorelist, axis=0)
overs = cumscores[:, 0]
overs = overs.astype('int32')+10/6*(overs - overs.astype('int32'))
runs = cumscores[:, 2]
economy = runs/overs
wickets = cumscores[:, 3]
average = wickets/(runs+1)
sr = wickets/overs
contrib = cumscores[:, 4]/np.arange(1, cumscores.shape[0]+1)
stats = np.array([overs, average, economy, sr, contrib]).T
return [datetime.strptime(date, "%Y-%m-%d") for date in dates], stats
batsmen_stats = {key:getBatStats(getBatScores(v)) for key,v in batsmen.items()}
bowlers_stats = {key:getBowlStats(getBowlScores(v)) for key,v in bowlers.items()}
with open('T11/scorecard.json', 'r') as f:
scorecards = json.load(f)
position = dict()
for code, match in scorecards.items():
for pos, batsmen in enumerate(match['BATTING1']):
if batsmen[0] in position:
position[batsmen[0]].append(pos+1)
else:
position[batsmen[0]]=[pos+1]
for pos, batsmen in enumerate(match['BATTING2']):
if batsmen[0] in position:
position[batsmen[0]].append(pos+1)
else:
position[batsmen[0]]=[pos+1]
position = {int(k):max(set(v), key = v.count) for k,v in position.items()}
for missing in set(att['Code']) - set(position.keys()):
position[missing]=0
with open('T11/region.json','r') as f:
region = json.load(f)
with open('T11/tmap.json','r') as f:
tmap = json.load(f)
matches = pd.read_csv('T11/matches.csv')
att['BatPos']=att['Code'].apply(lambda x : position[x])
matches['GroundCode']=matches['GroundCode'].apply(lambda x : region[str(x)])
matches=matches[pd.to_datetime(matches['Date'], format='%Y-%m-%d')>"1990-01-01"]
df_cards = pd.DataFrame(scorecards).transpose()
df_cards = df_cards[df_cards.index.astype(int).isin(matches['MatchCode'])]
matches = matches[matches['MatchCode'].isin(df_cards.index.astype(int))]
att=pd.get_dummies(att, columns=['BatPos'])
le = {
'GC' : LE(),
'Team' : LE(),
'Venue' : LE(),
}
le['Team'].fit((matches['Team_1'].tolist())+(matches['Team_2'].tolist()))
matches['Team_1']=le['Team'].transform(matches['Team_1'])
matches['Team_2']=le['Team'].transform(matches['Team_2'])
matches['Venue']=le['Venue'].fit_transform(matches['Venue'])
matches['GroundCode']=le['GC'].fit_transform(matches['GroundCode'])
matches
patts = att[['BatHand', 'BowlHand', 'BowlType', 'BatPos_0', 'BatPos_1', 'BatPos_2', 'BatPos_3', 'BatPos_4', 'BatPos_5', 'BatPos_6', 'BatPos_7', 'BatPos_8', 'BatPos_9', 'BatPos_10']].values
pcodes = att['Code'].tolist()
attdict = dict()
for i,pc in enumerate(pcodes):
attdict[pc]=patts[i]
df_cards['MatchCode']=df_cards.index.astype(int)
matches=matches.sort_values(by='MatchCode')
df_cards=df_cards.sort_values(by='MatchCode')
df_cards.reset_index(drop=True, inplace=True)
matches.reset_index(drop=True, inplace=True)
df_cards['BAT2']=le['Team'].transform(df_cards['ORDER'].apply(lambda x : tmap[x[1]]))
df_cards['BAT1']=le['Team'].transform(df_cards['ORDER'].apply(lambda x : tmap[x[0]]))
df_cards['RUN1']=df_cards['SCORES'].apply(lambda x : x[0])
df_cards['RUN2']=df_cards['SCORES'].apply(lambda x : x[1])
df_cards['TOSS']=le['Team'].transform(df_cards['TOSS'].apply(lambda x : tmap[x]))
df = pd.merge(matches, df_cards)
df['PLAYERS1']=df['BATTING1'].apply(lambda x : [y[0] for y in x])
df['PLAYERS2']=df['BATTING2'].apply(lambda x : [y[0] for y in x])
_BAT1, _BAT2, _BOW1, _BOW2 = df['PLAYERS1'].tolist(), df['PLAYERS2'].tolist(), [[_x[0] for _x in x] for x in df['BOWLING1'].tolist()], [[_x[0] for _x in x] for x in df['BOWLING2'].tolist()]
for i in range(len(_BAT1)):
try:
_BAT1[i].append(list(set(_BOW2[i])-set(_BAT1[i]))[0])
_BAT2[i].append(list(set(_BOW1[i])-set(_BAT2[i]))[0])
except:
pass
df['PLAYERS1'], df['PLAYERS2'] = _BAT1, _BAT2
df=df[['Date', 'Team_1', 'Team_2', 'Venue', 'GroundCode', 'TOSS', 'BAT1', 'BAT2', 'RUN1', 'RUN2', 'PLAYERS1', 'PLAYERS2']]
df=df[df['PLAYERS1'].apply(lambda x : len(x)==11) & df['PLAYERS2'].apply(lambda x : len(x)==11)]
df.reset_index(drop=True, inplace=True)
Team_1, Team_2, BAT1, BAT2, BOWL1, BOWL2= [], [], [], [], [], []
for t1,t2,b1,b2 in zip(df['Team_1'].tolist(), df['Team_2'].tolist(), df['BAT1'].tolist(), df['BAT2'].tolist()):
if b1==t1:
Team_1.append(t1)
Team_2.append(t2)
else:
Team_1.append(t2)
Team_2.append(t1)
df['Team_1']=Team_1
df['Team_2']=Team_2
df.drop(['BAT1', 'BAT2', 'Venue'],axis=1, inplace=True)
def getStats(code, date):
_date = datetime.strptime(date, "%Y-%m-%d")
if code in batsmen_stats:
i = bisect.bisect_left(batsmen_stats[code][0], _date)-1
if i == -1:
bat = np.zeros(4)
else:
bat = batsmen_stats[code][1][i]
else:
bat = np.zeros(4)
if code in bowlers_stats:
i = bisect.bisect_left(bowlers_stats[code][0], _date)-1
if i == -1:
bowl = np.zeros(5)
else:
bowl = bowlers_stats[code][1][i]
else:
bowl = np.zeros(5)
if int(code) in attdict:
patt = attdict[int(code)]
else:
patt = np.zeros(14)
stats = np.concatenate([bat, bowl, patt])
return stats
def getScores(code, date):
if code in _batsmen_scores and date in _batsmen_scores[code]:
bat = _batsmen_scores[code][date]
else:
bat = np.zeros(4)
if code in _bowlers_scores and date in _bowlers_scores[code]:
bowl = _bowlers_scores[code][date]
else:
bowl = np.zeros(5)
return np.concatenate([bat, bowl])
P1, P2, Dates = df['PLAYERS1'].tolist(), df['PLAYERS2'].tolist(), df['Date'].tolist()
PStats1, PStats2 = [[getStats(p, date) for p in team] for team,date in zip(P1,Dates)], [[getStats(p, date) for p in team] for team,date in zip(P2,Dates)]
PScores1, PScores2 = [[getScores(p, date) for p in team] for team,date in zip(P1,Dates)], [[getScores(p, date) for p in team] for team,date in zip(P2,Dates)]
def getNRR(matchcode):
card = scorecards[matchcode]
run1, run2 = card['SCORES']
overs = sum([int(b[1]) + 10/6*(b[1]-int(b[1])) for b in card['BOWLING2']])
allout = not (len(card['BATTING2'][-1][1])<2 or ('not' in card['BATTING2'][-1][1]))
if allout:
overs=50
return abs((run1/50) - (run2/overs))
df['NRR']=matches['MatchCode'].apply(lambda x : getNRR(str(x)))
df['TEAM1WIN'] = 0
df.loc[df['RUN1'] > df['RUN2'], 'TEAM1WIN'] = 1  # avoid chained-indexing assignment
df_0 = df[df['TEAM1WIN'] == 0].copy()
df_1 = df[df['TEAM1WIN'] == 1].copy()
df_0['NRR'] = -df_0['NRR']  # losing side gets a negative net run rate
df = (df_0.append(df_1)).sort_index()
nPStats1, nPStats2, nPScores1, nPScores2 = np.array(PStats1), np.array(PStats2), np.array(PScores1), np.array(PScores2)
StatMaxes = np.max(np.concatenate([nPStats1, nPStats2]), axis=(0,1))
dfStats_N1 = nPStats1/StatMaxes
dfStats_N2 = nPStats2/StatMaxes
ScoreMaxes = np.max(np.concatenate([nPScores1, nPScores2]), axis=(0,1))
dfScores_N1 = nPScores1/ScoreMaxes
dfScores_N2 = nPScores2/ScoreMaxes
NRRMax = np.max(df['NRR'])
df['NRR']=df['NRR']/NRRMax
nnPStats1 = np.concatenate([dfStats_N1, dfStats_N2],axis=0)
nnPStats2 = np.concatenate([dfStats_N2, dfStats_N1],axis=0)
nnPScores1 = np.concatenate([dfScores_N1, dfScores_N2],axis=0)
nnPScores2 = np.concatenate([dfScores_N2, dfScores_N1],axis=0)
_NRR = np.concatenate([df['NRR'].values, -df['NRR'].values])
train_idx, test_idx = train_test_split(np.arange(2*len(df)), test_size=0.1)
import torch.nn as nn
import torch
from torch import optim
class AE(nn.Module):
def __init__(self, input_shape=12, output_shape=1, hidden=16, dropout=0.2):
super(AE, self).__init__()
self.hidden = hidden
self.input_shape = input_shape
self.output_shape = output_shape
        # Gaussian noise is added manually in forward(); no separate noise layer is needed here
self.player_encoder = nn.Sequential(
nn.Linear(input_shape, hidden),
nn.Tanh(),
nn.Dropout(dropout),
nn.Linear(hidden, hidden),
nn.Tanh(),
nn.Dropout(dropout),
)
self.score_regressor = nn.Sequential(
nn.Linear(hidden, 9),
nn.Tanh(),
)
self.decoder = nn.Sequential(
nn.Linear(hidden, input_shape)
)
self.team_encoder = nn.Sequential(
nn.Linear(11*hidden, hidden*4),
nn.Tanh(),
nn.Dropout(dropout),
)
self.nrr_regressor = nn.Sequential(
nn.Linear(hidden*8, hidden*2),
nn.Tanh(),
nn.Dropout(dropout),
nn.Linear(hidden*2, output_shape),
nn.Tanh(),
)
def forward(self, x1, x2):
encoded1, decoded1, scores1 = [], [], []
encoded2, decoded2, scores2 = [], [], []
for i in range(11):
e1 = self.player_encoder(x1[:,i,:])
d1 = self.decoder(e1)
e2 = self.player_encoder(x2[:,i,:])
d2 = self.decoder(e2)
noise = (0.1**0.5)*torch.randn(e1.size())
e1, e2 = e1 + noise, e2 + noise
scores1.append(self.score_regressor(e1))
scores2.append(self.score_regressor(e2))
encoded1.append(e1)
decoded1.append(d1)
encoded2.append(e2)
decoded2.append(d2)
team1, team2 = self.team_encoder(torch.cat(tuple(encoded1), axis=1)), self.team_encoder(torch.cat(tuple(encoded2), axis=1))
out = self.nrr_regressor(torch.cat((team1, team2), axis=1))
decoded=torch.cat(tuple(decoded1 + decoded2), axis=1)
scores1=torch.cat(tuple(scores1),axis=1)
scores2=torch.cat(tuple(scores2),axis=1)
return decoded, out, scores1, scores2
model = AE(dropout=0.3)
criterion = nn.MSELoss()
ED_Loss_train, NRR_Loss_train, Player_Loss_train = [], [], []
ED_Loss_test, NRR_Loss_test, Player_Loss_test = [], [], []
optimizer = optim.RMSprop(model.parameters(), lr=3e-4, )
epochs = 10000
for epoch in range(1,epochs+1):
model.train()
inputs1 = torch.FloatTensor(nnPStats1[:,:,:12][train_idx])
inputs2 = torch.FloatTensor(nnPStats2[:,:,:12][train_idx])
outputs = torch.FloatTensor(_NRR[train_idx].reshape(-1,1))
optimizer.zero_grad()
decoded, out, scores1, scores2 = model(inputs1, inputs2)
inp = (inputs1).view(train_idx.shape[0], -1), (inputs2).view(train_idx.shape[0], -1)
loss1 = criterion(decoded, torch.cat(inp, axis=1))
loss2 = criterion(out, outputs)
loss3 = criterion(scores1, torch.FloatTensor(nnPScores1[train_idx]).view(train_idx.shape[0], -1))
loss4 = criterion(scores2, torch.FloatTensor(nnPScores2[train_idx]).view(train_idx.shape[0], -1))
loss = 1e-5*loss1 + 1*loss2 + 1e-3*(loss3 + loss4)
loss.backward()
ED_Loss_train.append(loss1.item())
NRR_Loss_train.append(loss2.item())
Player_Loss_train.append((loss3.item()+loss4.item())/2)
optimizer.step()
if epoch%100==0:
print(f"Epoch {epoch}/{epochs}")
print("Train Losses Decoder: %0.3f NRR: %0.3f Player Performance %0.3f" % (loss1.item(), loss2.item(), (loss3.item()+loss4.item())/2))
model.eval()
inputs1 = torch.FloatTensor(nnPStats1[:,:,:12][test_idx])
inputs2 = torch.FloatTensor(nnPStats2[:,:,:12][test_idx])
outputs = torch.FloatTensor(_NRR[test_idx].reshape(-1,1))
decoded, out, scores1, scores2 = model(inputs1, inputs2)
inp = (inputs1).view(test_idx.shape[0], -1), (inputs2).view(test_idx.shape[0], -1)
loss1 = criterion(decoded, torch.cat(inp, axis=1))
loss2 = criterion(out, outputs)
loss3 = criterion(scores1, torch.FloatTensor(nnPScores1[test_idx]).view(test_idx.shape[0], -1))
loss4 = criterion(scores2, torch.FloatTensor(nnPScores2[test_idx]).view(test_idx.shape[0], -1))
ED_Loss_test.append(loss1.item())
print("Validation Losses Decoder: %0.3f NRR: %0.3f Player Performance: %0.3f" % (loss1.item(), loss2.item(), (loss3.item()+loss4.item())/2))
NRR_Loss_test.append(loss2.item())
out, outputs = out.detach().numpy(), outputs.detach().numpy()
Player_Loss_test.append((loss3.item()+loss4.item())/2)
acc=100*np.sum((out*outputs)>0)/out.shape[0]
print("Val Accuracy: %0.3f" % acc)
sns.lineplot(x=np.arange(1,10001), y=ED_Loss_train)
sns.lineplot(x=np.arange(1,10001,50), y=ED_Loss_test)
sns.lineplot(x=np.arange(1,10001), y=NRR_Loss_train)
sns.lineplot(x=np.arange(1,10001,50), y=NRR_Loss_test)
sns.lineplot(x=np.arange(1,10001), y=Player_Loss_train)
sns.lineplot(x=np.arange(1,10001,50), y=Player_Loss_test)
```
# Multivariate Dependencies Beyond Shannon Information
This is a companion Jupyter notebook to the work *Multivariate Dependencies Beyond Shannon Information* by Ryan G. James and James P. Crutchfield. This worksheet was written by Ryan G. James. It primarily makes use of the ``dit`` package for information theory calculations.
## Basic Imports
We first import basic functionality. Further functionality will be imported as needed.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from dit import ditParams, Distribution
from dit.distconst import uniform
ditParams['repr.print'] = ditParams['print.exact'] = True
```
## Distributions
Here we define the two distributions to be compared.
```
from dit.example_dists.mdbsi import dyadic, triadic
dists = [('dyadic', dyadic), ('triadic', triadic)]
```
## I-Diagrams and X-Diagrams
Here we construct the I- and X-Diagrams of both distributions. The I-Diagram is constructed by considering how the entropies of each variable interact. The X-Diagram is similar, but considers how the extropies of each variable interact.
```
from dit.profiles import ExtropyPartition, ShannonPartition
def print_partition(dists, partition):
ps = [str(partition(dist)).split('\n') for _, dist in dists ]
print('\t' + '\t\t\t\t'.join(name for name, _ in dists))
for lines in zip(*ps):
print('\t\t'.join(lines))
print_partition(dists, ShannonPartition)
```
Both I-Diagrams are the same. This implies that *no* Shannon measure (entropy, mutual information, conditional mutual information [including the transfer entropy], co-information, etc) can differentiate these patterns of dependency.
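As a quick spot check of that claim (a minimal sketch, not part of the original worksheet), any conditional mutual information evaluates to the same value on both distributions; for example $I[X_0:X_1|X_2]$, computed with `coinformation` and a conditioning variable:
```
from dit.multivariate import coinformation

# I[X0 : X1 | X2] for both distributions; the values coincide
for name, dist in dists:
    cmi = coinformation(dist, rvs=[[0], [1]], crvs=[2])
    print(f"{name}: I[X0:X1|X2] = {cmi:.3f}")
```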
```
print_partition(dists, ExtropyPartition)
```
Similarly, the X-Diagrams are identical and so no extropy-based measure can differentiate the distributions.
## Measures of Mutual and Common Information
We now compute several measures of mutual and common information:
```
from prettytable import PrettyTable
from dit.multivariate import (entropy,
coinformation,
total_correlation,
dual_total_correlation,
independent_information,
caekl_mutual_information,
interaction_information,
intrinsic_total_correlation,
gk_common_information,
wyner_common_information,
exact_common_information,
functional_common_information,
mss_common_information,
tse_complexity,
)
from dit.other import (extropy,
disequilibrium,
perplexity,
LMPR_complexity,
renyi_entropy,
tsallis_entropy,
)
def print_table(title, table, dists):
pt = PrettyTable(field_names = [''] + [name for name, _ in table])
for name, _ in table:
pt.float_format[name] = ' 5.{0}'.format(3)
for name, dist in dists:
pt.add_row([name] + [measure(dist) for _, measure in table])
print("\n{}".format(title))
print(pt.get_string())
```
### Entropies
Entropies generally capture the uncertainty contained in a distribution. Here, we compute the Shannon entropy, the Renyi entropy of order 2 (also known as the collision entropy), and the Tsallis entropy of order 2. Though we only compute the order 2 values, any order will produce values identical for both distributions.
```
entropies = [('H', entropy),
('Renyi (α=2)', lambda d: renyi_entropy(d, 2)),
('Tsallis (q=2)', lambda d: tsallis_entropy(d, 2)),
]
print_table('Entropies', entropies, dists)
```
The entropies for both distributions are identical. This is not surprising: they have the same probability mass function.
### Mutual Informations
Mutual informations are multivariate generalizations of the standard Shannon mutual information. By far, the most widely used (and often simply assumed to be the only) generalization is the total correlation, sometimes called the multi-information. It is defined as:
$$
T[\mathbf{X}] = \sum H[X_i] - H[\mathbf{X}] = \sum p(\mathbf{x}) \log_2 \frac{p(\mathbf{x})}{p(x_1)p(x_2)\ldots p(x_n)}
$$
Other generalizations exist, though, including the co-information, the dual total correlation, and the CAEKL mutual information.
```
mutual_informations = [('I', coinformation),
('T', total_correlation),
('B', dual_total_correlation),
('J', caekl_mutual_information),
('II', interaction_information),
]
print_table('Mutual Informations', mutual_informations, dists)
```
The equivalence of all these generalizations is not surprising: Each of them can be defined as a function of the I-diagram, and so must be identical here.
### Common Informations
Common informations are generally defined using an auxiliary random variable which captures some amount of information shared by the variables of interest. For all but the Gács-Körner common information, that shared information is the dual total correlation.
```
common_informations = [('K', gk_common_information),
('C', lambda d: wyner_common_information(d, niter=1, polish=False)),
('G', lambda d: exact_common_information(d, niter=1, polish=False)),
('F', functional_common_information),
('M', mss_common_information),
]
print_table('Common Informations', common_informations, dists)
```
As it turns out, only the Gács-Körner common information, `K`, distinguishes the two.
### Other Measures
Here we list a variety of other information measures.
```
other_measures = [('IMI', lambda d: intrinsic_total_correlation(d, d.rvs[:-1], d.rvs[-1])),
('X', extropy),
('R', independent_information),
('P', perplexity),
('D', disequilibrium),
('LMRP', LMPR_complexity),
('TSE', tse_complexity),
]
print_table('Other Measures', other_measures, dists)
```
Several other measures fail to differentiate our two distributions. For many of these (`X`, `P`, `D`, `LMRP`) this is because they are defined relative to the probability mass function. For the others, it is due to the equality of the I-diagrams. Only the intrinsic mutual information, `IMI`, can distinguish the two.
## Information Profiles
Lastly, we consider several "profiles" of the information.
```
from dit.profiles import *
def plot_profile(dists, profile):
n = len(dists)
plt.figure(figsize=(8*n, 6))
ent = max(entropy(dist) for _, dist in dists)
for i, (name, dist) in enumerate(dists):
ax = plt.subplot(1, n, i+1)
profile(dist).draw(ax=ax)
if profile not in [EntropyTriangle, EntropyTriangle2]:
ax.set_ylim((-0.1, ent + 0.1))
ax.set_title(name)
```
### Complexity Profile
```
plot_profile(dists, ComplexityProfile)
```
Once again, these two profiles are identical due to the I-Diagrams being identical. The complexity profile incorrectly suggests that there is no information at the scale of 3 variables.
### Marginal Utility of Information
```
plot_profile(dists, MUIProfile)
```
The marginal utility of information is based on a linear programming problem with constraints related to values from the I-Diagram, and so here again the two distributions are undifferentiated.
### Connected Informations
```
plot_profile(dists, SchneidmanProfile)
```
The connected informations are based on differences between maximum entropy distributions with differing $k$-way marginal distributions fixed. Here, the two distributions are differentiated.
### Multivariate Entropy Triangle
```
plot_profile(dists, EntropyTriangle)
```
Both distributions are at an identical location in the multivariate entropy triangle.
## Partial Information
We next consider a variety of partial information decompositions.
```
from dit.pid.helpers import compare_measures
for name, dist in dists:
compare_measures(dist, name=name)
```
Here we see that the PID determines that in the dyadic distribution two random variables uniquely contribute a bit of information to the third, whereas in the triadic distribution two random variables redundantly influence the third with one bit, and synergistically with another.
## Multivariate Extensions
```
from itertools import product
outcomes_a = [
(0,0,0,0),
(0,2,3,2),
(1,0,2,1),
(1,2,1,3),
(2,1,3,3),
(2,3,0,1),
(3,1,1,2),
(3,3,2,0),
]
outcomes_b = [
(0,0,0,0),
(0,0,1,1),
(0,1,0,1),
(0,1,1,0),
(1,0,0,1),
(1,0,1,0),
(1,1,0,0),
(1,1,1,1),
]
outcomes = [ tuple([2*a+b for a, b in zip(a_, b_)]) for a_, b_ in product(outcomes_a, outcomes_b) ]
quadradic = uniform(outcomes)
dyadic2 = uniform([(4*a+2*c+e, 4*a+2*d+f, 4*b+2*c+f, 4*b+2*d+e) for a, b, c, d, e, f in product([0,1], repeat=6)])
dists2 = [('dyadic2', dyadic2), ('quadradic', quadradic)]
print_partition(dists2, ShannonPartition)
print_partition(dists2, ExtropyPartition)
print_table('Entropies', entropies, dists2)
print_table('Mutual Informations', mutual_informations, dists2)
print_table('Common Informations', common_informations, dists2)
print_table('Other Measures', other_measures, dists2)
plot_profile(dists2, ComplexityProfile)
plot_profile(dists2, MUIProfile)
plot_profile(dists2, SchneidmanProfile)
plot_profile(dists2, EntropyTriangle)
```
```
%run technical_trading.py
#%%
data = pd.read_csv('../../data/hs300.csv', index_col='date', parse_dates=['date'])
data.vol = data.vol.astype(float)
#start = pd.Timestamp('2005-09-01')
#end = pd.Timestamp('2012-03-15')
#data = data[start:end]
#%%
chaikin = CHAIKINAD(data, m = 14, n = 16)
kdj = KDJ(data)
adx = ADX(data)
emv = EMV(data, n = 20, m = 23)
cci = CCI(data, n=20, m = 8)
bbands = BBANDS(data, n =20, m=2)
aroon = AROON(data)
cmo = CMO(data)
#%%
signal = pd.DataFrame(index=data.index)
#signal['kdj'] = kdj['2']
signal['chaikin'] = chaikin['3']
signal['emv'] = emv['2']
signal['adx'] = adx['1']
signal['cci'] = cci['2']
signal['aroon'] = aroon['2']
signal['cmo'] = cmo['2']
signal['bbands'] = bbands['1']
signal = signal.fillna(0)
returns_c = Backtest(data, signal.mode(axis=1).iloc[:, 0])
(1+returns_c).cumprod().plot()
#%%
oos_date = pd.Timestamp('2012-03-15')
#pf.create_returns_tear_sheet(returns, live_start_date=oos_date)
pf.create_full_tear_sheet(returns_c)
#%%
%matplotlib inline
(1+returns_c).cumprod().plot()
returns = pd.DataFrame(index=data.index)
#signal['kdj'] = kdj['2']
returns['chaikin'] = np.array(Backtest(data, chaikin['3']))
returns['emv'] = np.array(Backtest(data, emv['2']))
returns['adx'] = np.array(Backtest(data, adx['1']))
returns['cci'] = np.array(Backtest(data, cci['2']))
returns['aroon'] = np.array(Backtest(data, aroon['2']))
returns['cmo'] = np.array(Backtest(data, cmo['2']))
returns['bbands'] = np.array(Backtest(data, bbands['1']))
returns = returns.fillna(0)
(1+returns['chaikin']).cumprod().plot()
nav = pd.DataFrame()
nav['combined'] = (1+returns_c).cumprod()
ema5 = talib.EMA(np.array(nav['combined']), 5)
ema20 = talib.EMA(np.array(nav['combined']), 20)
signal5 = (nav['combined'] > ema5) * 1 + (nav['combined']<ema5) *0
signal20 = (nav['combined'] > ema20) * 1 + (nav['combined']<ema20) * 0
signal5_20 = (ema5 > ema20) * 1 + (ema20 < ema5)*0
return_ema5 = returns_c * signal5.shift(1)
return_ema20 = returns_c * signal20.shift(1)
nav['ema5'] = (1+return_ema5).cumprod()
nav['ema20'] = (1+return_ema20).cumprod()
#nav['ema5_20'] = (1+retrun_ema5_20).cumprod()
nav.plot()
(1+returns.sum(1)/4).cumprod().plot()
ret_target = returns.sum(1) / 4
ret_target.index = data.index.tz_localize('UTC')
pf.create_full_tear_sheet(ret_target)
%run ../Strategy_Evalution_Tools/turtle_evalution.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
import array
import random
import numpy
from deap import algorithms
from deap import base
from deap import creator
from deap import tools
### insample vs. oos
returns_is = returns.iloc[:, :]
returns_oos = returns.iloc[1001:, :]
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", array.array, typecode='b', fitness=creator.FitnessMax)
toolbox = base.Toolbox()
# Attribute generator
toolbox.register("attr_bool", random.randint, 0, 1)
# Structure initializers
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, 7)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
def evalOneMax(individual):
print(individual)
for i in range(7) :
if i == 0:
            rets = returns_is.iloc[:, i] * individual[i]
        else :
            rets = rets + returns_is.iloc[:, i] * individual[i]
rets = rets.fillna(0)
sharpe, rsharpe = Sharpe(rets)
rrr = RRR(rets)
if np.isnan(rsharpe) :
rsharpe = 0
print(rsharpe)
    return rsharpe,
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
def main():
random.seed(64)
pop = toolbox.population(n=128)
hof = tools.HallOfFame(2)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", numpy.mean)
stats.register("std", numpy.std)
stats.register("min", numpy.min)
stats.register("max", numpy.max)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=10,
stats=stats, halloffame=hof, verbose=True)
#print(log)
return pop, log, hof
if __name__ == "__main__":
pop, log, hof = main()
pop
hof.items
import operator
from deap import base
from deap import creator
from deap import gp
from deap import tools
pset = gp.PrimitiveSet("MAIN", arity=1)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin,
pset=pset)
toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=2)
toolbox.register("individual", tools.initIterate, creator.Individual,
toolbox.expr)
### insample and out-of-sample test
data = pd.read_csv('../../data/hs300.csv', index_col='date', parse_dates=['date'])
data.vol = data.vol.astype(float)
```
```
import numpy as np
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from nn_interpretability.interpretation.lrp.lrp_0 import LRP0
from nn_interpretability.interpretation.lrp.lrp_eps import LRPEpsilon
from nn_interpretability.interpretation.lrp.lrp_gamma import LRPGamma
from nn_interpretability.interpretation.lrp.lrp_ab import LRPAlphaBeta
from nn_interpretability.interpretation.lrp.lrp_composite import LRPMix
from nn_interpretability.model.model_trainer import ModelTrainer
from nn_interpretability.model.model_repository import ModelRepository
from nn_interpretability.visualization.mnist_visualizer import MnistVisualizer
from nn_interpretability.dataset.mnist_data_loader import MnistDataLoader
model_name = 'model_cnn.pt'
train = False
mnist_data_loader = MnistDataLoader()
MnistVisualizer.show_dataset_examples(mnist_data_loader.trainloader)
model = ModelRepository.get_general_mnist_cnn(model_name)
if train:
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0005)
model.train()
ModelTrainer.train(model, criterion, optimizer, mnist_data_loader.trainloader)
ModelRepository.save(model, model_name)
```
# I. LRP-0
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRP0(model, target_class, transforms, visualize_layer)
interpretor = LRP0(model, i, None, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
## Comparison between LRP gradient and LRP convolution transpose implementation
For **convolution layers** there is no difference; we obtain the same numerical results with either approach. However, for **pooling layers** the result from the convolution-transpose approach is **4^n** times as large as that from the gradient approach, where n is the number of pooling layers. The reason is that in every average-unpooling operation, s is unpooled directly without being multiplied by any scaling factor. In the gradient approach, every input activation influences the output equally, so the gradient for every activation entry is 0.25; the operation is analogous to first unpooling and then multiplying s by a scale of 0.25.
The gradient approach is more consistent with the equation described in Montavon's paper. Since we treat pooling layers like convolutional layers, the scaling factor of 0.25 from pooling should be accounted for in the steps where we multiply by the weights of the convolutional layers (step 1 and step 3).
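A tiny standalone sketch of this discrepancy, assuming a PyTorch-style 2×2 average pooling (this is just an illustration, not the repository's own LRP code):
```
import torch
import torch.nn.functional as F

# Gradient approach: backpropagating through a 2x2 average pool hands each
# input a 1/4 share of the upstream relevance.
x = torch.ones(1, 1, 2, 2, requires_grad=True)
y = F.avg_pool2d(x, kernel_size=2)
y.backward(torch.ones_like(y))
print(x.grad)  # every entry is 0.25

# Transpose-style unpooling: the upstream value is copied to each position
# without the 1/4 factor, so each pooling layer inflates the result by 4.
s = torch.ones(1, 1, 1, 1)
print(s.repeat_interleave(2, dim=2).repeat_interleave(2, dim=3))  # all entries are 1.0
```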
# II. LRP-ε
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPEpsilon(model, target_class, transforms, visualize_layer)
interpretor = LRPEpsilon(model, i, None, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
# III. LRP-γ
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPGamma(model, target_class, transforms, visualize_layer)
interpretor = LRPGamma(model, i, None, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
# IV. LRP-αβ
## 1. LRP-α1β0
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPAlphaBeta(model, target_class, transforms, alpha, beta, visualize_layer)
interpretor = LRPAlphaBeta(model, i, None, 1, 0, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
## 2. LRP-α2β1
```
images = []
img_shape = (28, 28)
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPAlphaBeta(model, target_class, transforms, alpha, beta, visualize_layer)
interpretor = LRPAlphaBeta(model, i, None, 2, 1, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
# V. Composite LRP
```
images = []
img_shape = (28, 28)
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPMix(model, target_class, transforms, alpha, beta, visualize_layer)
interpretor = LRPMix(model, i, None, 1, 0, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
# Project 1: Linear Regression Model
This is the first project of our data science fundamentals. This project is designed to solidify your understanding of the concepts we have learned in Regression and to test your knowledge on regression modelling. There are four main objectives of this project.
1\. Build Linear Regression Models
* Use closed form solution to estimate parameters
* Use packages of choice to estimate parameters<br>
2\. Model Performance Assessment
* Provide an analytical rationale with choice of model
* Visualize the Model performance
* MSE, R-Squared, Train and Test Error <br>
3\. Model Interpretation
* Interpret the results of your model
* Interpret the model assessment <br>
4\. Model Diagnostics
* Does the model meet the regression assumptions
#### About this Notebook
1\. This notebook should guide you through this project and provide starter code
2\. The dataset used is the housing dataset from Seattle homes
3\. Feel free to consult online resources when stuck or discuss with data science team members
Let's get started.
### Packages
Importing the necessary packages for the analysis
```
# Necessary Packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Model and data preprocessing
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn import preprocessing
%matplotlib inline
```
Now that you have imported your packages, let's read the data that we are going to be using. The dataset provided is a titled *housing_data.csv* and contains housing prices and information about the features of the houses. Below, read the data into a variable and visualize the top 8 rows of the data.
```
# Initializing seed
np.random.seed(42)
data1 = pd.read_csv('housing_data.csv')
data = pd.read_csv('housing_data_2.csv')
data.head(8)
```
### Split data into train and test
In the code below, we need to split the data into train and test sets for modeling and validating our models. We will cover Train/Validation/Test as we go along in the project. Fill in the following code.
1\. Subset the features to the variable: features <br>
2\. Subset the target variable: target <br>
3\. Set the test size in proportion in to a variable: test_size <br>
```
features = data[['lot_area', 'firstfloor_sqft', 'living_area', 'bath', 'garage_area', 'price']]
target = data['price']
test_size = .33
x_train, x_test, y_train, y_test = train_test_split(features, target, test_size=test_size, random_state=42)
```
### Data Visualization
The best way to explore the data we have is to build some plots that can help us determine the relationship of the data. We can use a scatter matrix to explore all our variables. Below is some starter code to build the scatter matrix
```
features = pd.plotting.scatter_matrix(x_train, figsize=(14,8), alpha=1, diagonal='kde')
#columns = pd.plotting.scatter_matrix(columns, figsize=(14,8), alpha=1, diagonal='kde')
```
Based on the scatter matrix above, write a brief description of what you observe. In thinking about the description, think about the relationship and whether linear regression is an appropriate choice for modelling this data.
#### a. lot_area
My initial intuitions tell me that lot_area would be the best indicator of price; that being said, there is a weak correlation between lot_area and the other features, which is a good sign! However, the distribution is dramatically skewed-right, indicating that the mean lot_area is greater than the median. This tells me that lot_area stays around the same size while price increases. In turn, that tells me that some other feature is helping determine the price, because if lot_area were determining the increase in price, we'd see a linear distribution. In determining the best feature for my linear regression model, I think lot_area may be one of the least fitting to use.
#### b. firstfloor_sqft
There is a stronger correlation between firstfloor_sqft and the other features. The distribution is still skewed-right, making the median a better measure of center. firstfloor_sqft would be a good candidate for the linear regression model because of the stronger correlation and wider distribution; however, there appears to be an overly strong, linear correlation between firstfloor_sqft and living_area. Given that this linear correlation goes against the regression assumption that "all inputs are linearly independent," I would not consider using both in my model. I could, however, use one or the other.
#### c. living_area
There is a similarly strong correlation between living_area (as compared to firstfloor_sqft) and the other features, but these plots are better distributed than firstfloor_sqft. A right skew still exists, but less so than the firstfloor_sqft. However, the observation of a strong, linear correlation between firstfloor_sqft and living_area (or living_area and firstfloor_sqft) is reinforced here. Thus, I would not use both of these in my final model and having to choose between the two, I will likely choose living_area since it appears to be more well-distributed.
#### d. bath
Baths are static numbers, so the plots are much less distributed; however, the length and the clustering of the bath to living_area & bath to garage_area may indicate a correlation. Since I cannot use both living_area and firstfloor_sqft, and I think living_area has a better distribution, I would consider using bath in conjunction with living_area.
#### e. garage_area
Garage_area appears to be well-distributed with the lowest correlation between the other features. This could make it a great fit for the final regression model. It's also the least skewed right distribution.
#### Correlation Matrix
In the code below, compute the correlation matrix and write a few thoughts about the observations. In doing so, consider the interplay in the features and how their correlation may affect your modeling.
The correlation matrix below is in line with my thought process. Lot_area has the lowest correlation between it and the other features, but it's not well distributed. firstfloor_sqft has a strong correlation between it and living_area. Given that the correlation is just over 0.5, both features may be able to be used in the model since the correlation isn't overly strong; however, to be most accurate, I plan to leave out one of them (likely firstfloor_sqft). living_area also reflects this strong correlation between it and firstfloor_sqft. Surprisingly, there is a strong correlation between living_area and bath. Looking solely at the scatter matrix, I did not see this strong correlation. This changes my approach slightly, which I will outline below. garage_area, again, has the lowest correlations while being the most well-distributed.
#### Approach
Given this new correlation information, I will approach the regression model in one of the following ways:
1. Leave out bath as a feature and use living_area + garage_area.
2. Swap firstfloor_sqft for living_area and include bath + garage area.
#### Conclusion
I'm not 100% sure whether more features are better than fewer in this situation; however, I am sure that I want linearly independent features.
```
# Use pandas correlation function
x_train.corr(method='pearson').style.format("{:.2}").background_gradient(cmap=plt.get_cmap('coolwarm'), axis=1)
```
## 1. Build Your Model
Now that we have explored the data at a high level, let's build our model. From our sessions, we have discussed closed-form solutions, gradient descent, and using packages. In this section you will create your own estimators. Starter code is provided to make this easier.
#### 1.1. Closed Form Solution
Recall: <br>
$$\beta_0 = \bar {y} - \beta_1 \bar{x}$$ <br>
$$\beta_1 = \frac {cov(x, y)} {var(x)}$$ <br>
Below, let's define functions that will compute these parameters
```
# Pass the necessary arguments in the function to calculate the coefficients
def compute_estimators(feature, target):
n1 = np.sum(feature*target) - np.mean(target)*np.sum(feature)
d1 = np.sum(feature*feature) - np.mean(feature)*np.sum(feature)
# Compute the Intercept and Slope
beta1 = n1/d1
beta0 = np.mean(target) - beta1*np.mean(feature)
return beta0, beta1 # Return the Intercept and Slope
```
Run the compute estimators function above and display the estimated coefficients for any of the predictors/input variables.
```
# Remember to pass the correct arguments
x_array = np.array(data1['living_area'])
normalized_X = preprocessing.normalize([x_array])
beta0, beta1 = compute_estimators(normalized_X, data1['price'])
print(beta0, beta1)
#### Computing coefficients for our model by hand using the actual mathematical equations
#y = beta1x + beta0
#print(y)
```
#### 1.2. sklearn solution
Now that we know how to compute the estimators, let's leverage the sklearn module to compute the metrics for us. We have already imported the linear model, let's initialize the model and compute the coefficients for the model with the input above.
```
# Initialize the linear regression model here
model = linear_model.LinearRegression()
# Pass in the correct inputs
model.fit(data1[['living_area']], data1['price'])
# Print the coefficients
print("This is beta0:", model.intercept_)
print("This is beta1:", model.coef_)
#### Computing coefficients for our model using the sklearn package
```
Do the results from the cell above and your implementation match? They should be very close to each other.
#### Yes!! They match!
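The introduction to this section also mentioned gradient descent as another way to estimate the parameters, but it is not used elsewhere in this notebook. A minimal sketch for the same single-feature model might look like the following (the learning rate and iteration count are arbitrary choices for illustration, not part of the original project):
```
def gradient_descent(feature, target, lr=0.1, n_iters=5000):
    """Estimate intercept and slope for simple linear regression by gradient descent."""
    x = np.asarray(feature, dtype=float)
    y = np.asarray(target, dtype=float)
    # standardize x so a single learning rate behaves reasonably
    x_std = (x - x.mean()) / x.std()
    b0, b1 = 0.0, 0.0
    n = len(y)
    for _ in range(n_iters):
        error = (b0 + b1 * x_std) - y
        b0 -= lr * (2.0 / n) * error.sum()
        b1 -= lr * (2.0 / n) * (error * x_std).sum()
    # undo the standardization so the coefficients refer to the raw feature
    slope = b1 / x.std()
    intercept = b0 - slope * x.mean()
    return intercept, slope

gd_beta0, gd_beta1 = gradient_descent(data1['living_area'], data1['price'])
print(gd_beta0, gd_beta1)
```
The estimates should land close to the closed-form and sklearn coefficients above.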
### 2. Model Evaluation
Now that we have estimated our single model, we are going to compute the coefficients for all the inputs. We can use a for loop for multiple model estimation. However, we need to create a few functions:
1\. Prediction function: Functions to compute the predictions <br>
2\. MSE: Function to compute Mean Square Error <br>
```
#Function that computes predictions of our model using the betas above + the feature data we've been using
def model_predictions(intercept, slope, feature):
""" Compute Model Predictions """
y_hat = intercept+(slope*feature)
return y_hat
y_hat = model_predictions(beta0, beta1, data1['living_area'])
#Function to compute MSE which determines the total loss for each predicted data point in our model
def mean_square_error(y_outcome, predictions):
""" Compute the mean square error """
mse = (np.sum((y_outcome - predictions) ** 2))/np.size(predictions)
return mse
mse = mean_square_error(target, y_hat)
print(mse)
```
The last function we need is a plotting function to visualize our predictions relative to our data.
```
#Function used to plot the data
def plotting_model(feature, target, predictions, name):
""" Create a scatter and predictions """
fig = plt.figure(figsize=(10,8))
plot_model = model.fit(feature, target)
plt.scatter(x=feature, y=target, color='blue')
plt.plot(feature, predictions, color='red')
plt.xlabel(name)
plt.ylabel('Price')
return model
model = plotting_model(data1[['living_area']], data1['price'], y_hat, data1['living_area'].name)
```
## Considerations/Reasoning
#### Data Integrity
After my initial linear model based on the feature "living area," I've eliminated 8 data points. If you look at the graph above, there are 4 outliers that are clear, and at least 4 others that follow a similar trend based on the x, y relationship. I used ~3500 sqft of living area and any price above 600000 as my cutoffs for points not predictive of the model. Given the way these data points skew the above model, they intuitively appear to be outliers with high leverage. I determined this by comparing these high-leverage points with points similar to them in some way and determining whether each was an outlier (i.e., if point A's price was abnormally high, I found a point B with living area at or close to point A's living area and compared the price; vice versa if living area was abnormally high).
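For reference, a hypothetical sketch of the filter described above (the thresholds are the ones quoted in the text; the actual cleaned data set is the `housing_data_2.csv` file loaded at the top of the notebook):
```
# Drop the high-leverage points discussed above: living area beyond ~3500 sqft
# or prices above 600000.
mask = (data1['living_area'] <= 3500) & (data1['price'] <= 600000)
data_clean = data1[mask].copy()
print(len(data1) - len(data_clean), "points removed")
```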
#### Initial Feature Analysis - "Best" Feature (a priori)
Living area is the best metric to use to train the linear model because it incorporates multiple of the other features within it: first floor living space & bath. Living area has a high correlation with both first floor sq ft (0.53) and baths (0.63). Based on the other correlations, these are the two highest, and thus should immediately be eliminated. Additionally, based on initial intuition, one would assume that an increase in the metric "firstfloor sqft" will lead to an increase in the "living area" metric; if both firstfloor sqft and overall living area are increased, the "bath" metric will likely also increase to accommodate the additional living area/sqft in a home. Thus, I will not need to use them in my model because these can be accurately represented by the feature "living area."
### Single Feature Assessment
```
#Running each feature through to determine which has best linear fit
features = data[['living_area', 'garage_area', 'lot_area', 'firstfloor_sqft', 'bath']]
count = 0
for feature in features:
feature = features.iloc[:, count]
# Compute the Coefficients
beta0, beta1 = compute_estimators(feature, target)
count+=1
# Print the Intercept and Slope
print(feature.name)
print('beta0:', beta0)
print('beta1:', beta1)
# Compute the Train and Test Predictions
y_hat = model_predictions(beta0, beta1, feature)
# Plot the Model Scatter
name = feature.name
model = plotting_model(feature.values.reshape(-1, 1), target, y_hat, name)
# Compute the MSE
mse = mean_square_error(target, y_hat)
print('mean squared error:', mse)
print()
```
#### Analysis of Feature Linear Models
After eliminating these 8 data points, the MSE for Living Area dropped significantly from 8957196059.803959 to 2815789647.7664313. In fact, Living Area has the lowest MSE (2815789647.7664313) of all the individual models, and the best linear fit.
Garage Area has the next lowest MSE (3466639234.8407283), and the model is mostly linear; however, the bottom left of the model is concerning. You'll notice that a large number of data points go vertically upward, indicating an increase in price with 0 garage area. That says to me that garage area isn't predicting the price of these homes, which indicates that it may be a good feature to use in conjunction with another feature (i.e., Living Area), or, since those data points do not fit in with the rest of the population, they may need to be removed.
#### Run Model Assessment
Now that we have our functions ready, we can build individual models, compute predictions, plot our model results, and determine our MSE. Notice that we compute our MSE on the test set and not the train set.
### Dot Product (multiple feature) Assessment
```
#Models Living Area alone and compares it to the Dot Product of Living Area with each other feature
##Determining if a MLR would be a better way to visualize the data
features = data[['living_area', 'garage_area', 'lot_area', 'firstfloor_sqft', 'bath']]
count = 0
for feature in features:
feature = features.iloc[:, count]
#print(feature.head(0))
if feature.name == 'living_area':
x = data['living_area']
else:
x = feature * data['living_area']
# Compute the Coefficients
beta0, beta1 = compute_estimators(x, target)
# Print the Intercept and Slope
if feature.name == 'living_area':
print('living_area')
print('beta0:', beta0)
print('beta1:', beta1)
else:
print(feature.name, "* living_area")
print('beta0:', beta0)
print('beta1:', beta1)
# Compute the Train and Test Predictions
y_hat = model_predictions(beta0, beta1, x)
# Plot the Model Scatter
if feature.name == 'living_area':
name = 'living_area'
else:
name = feature.name + " " + "* living_area"
model = plotting_model(x.values.reshape(-1, 1), target, y_hat, name)
# Compute the MSE
mse = mean_square_error(target, y_hat)
print('mean squared error:', mse)
print()
count+=1
```
## Analysis
Based on the models, it appears that two of the dot products provide a more accurate model:
1. Living Area * First Floor SqFt
2. Living Area * Garage Area
These two dot products provide a lower MSE and thus lower the loss per prediction point.
#1.
My intuition says that Living Area, as a feature, will include First Floor SqFt in its data. The FirstFloor SqFt can be captured by Living Area, so it can be left out. Additionally, since one is included within the other, we cannot say anything in particular about Living Area or FirstFloor SqFt individually. Also, the correlation (Ln 24 & Out 24) between Living Area and FirstFloor SqFt is 0.53, which is the highest apart from Bath. This correlation is low in comparison to the "standard"; however, that standard is arbitrary. I've lowered it to be in context with the data sets I'm working with in this notebook.
#2.
The dot product of Living Area & Garage Area doesn't allow us to make a statement about each individually, unless we provide a model of each, which I will do below. This dot product is a better model. Garage Area is advertised as 'bonus' space and CANNOT be included in the overall square footage of the home (i.e., living area). Thus, the garage area vector will not be implicitly included within the living area vector, making them linearly independent.
Garage Area can be a sought-after feature depending on a buyer's desired lifestyle; more garage space would be sought after by buyers with more cars, which allows us to draw a couple of possible inferences about the buyers:
1. enough net worth/monthly to make payments on multiple vehicles plus make payments on a house/garage
2. enough disposable income to outright buy multiple vehicles plus make payments on a house/garage
Additionally, it stands to reason that garage area would scale with living area for pragmatic reasons (more living area implies more people and potentially more vehicles) and for aesthetic reasons (more living area makes home look larger and would need larger garage).
Homes with more living area and garage area may be sought after by buyers with the ability to spend more on a home, and thus the market would bear a higher price for those homes, which helps explain why living area * garage area is a better indicator of home price.
#### Conclusion
Combining living area with other features lowered the MSE for each. The lowest MSE is living area * garage area, which confirms my hypothesis: Living Area is the best feature to predict price, and garage area is good when used in conjunction.
```
#Modeling Living Area & Garage Area separately.
features = data[['living_area', 'garage_area']]
count = 0
for feature in features:
feature = features.iloc[:, count]
if feature.name == 'living_area':
x = data['living_area']
elif feature.name == 'garage_area':
x = data['garage_area']
beta0, beta1 = compute_estimators(x, target)
count+=1
if feature.name == 'living_area':
print('living_area')
print('beta0:', beta0)
print('beta1:', beta1)
elif feature.name == 'garage_area':
print('garage_area')
print('beta0:', beta0)
print('beta1:', beta1)
y_hat = model_predictions(beta0, beta1, x)
if feature.name == 'living_area':
name = 'living_area'
elif feature.name == 'garage_area':
name = 'garage_area'
model = plotting_model(x.values.reshape(-1, 1), target, y_hat, name)
mse = mean_square_error(target, y_hat)
print('mean squared error:', mse)
print()
#Modeling dot product of Living Area * Garage Area
features = data[['living_area']]
x = features.iloc[:, 0]
x2 = x * data['garage_area']
#x3 = x2 * data['bath']
# Compute the Coefficients
beta0, beta1 = compute_estimators(x2, target)
# Print the Intercept and Slope
print('Name: garage_area * living_area')
print('beta0:', beta0)
print('beta1:', beta1)
# Compute the Train and Test Predictions
y_hat_1 = model_predictions(beta0, beta1, x2)
# Plot the Model Scatter
name = 'garage_area * living_area'
model = plotting_model(x2.values.reshape(-1, 1), target, y_hat_1, name)
# Compute the MSE
mse = mean_square_error(target, y_hat_1)
print('mean squared error:', mse)
print()
```
## Reasoning
Above, I modeled both living area and garage area by themselves then the dot product of Living Area * Garage Area to highlight the MSE of each vs. the MSE of the dot product. Garage Area, much more so than Living Area, has a high MSE indicating that on its own, Garage Area isn't the best predictor of a home's price; we must take the data in context with reality, and intuitively speaking, one wouldn't assume that the garage area, on its own, would be a feature indicative of price.
This fact, combined with the assumption/implication that garage area may scale with living area, implies some correlation between the features, which would go against the linear regression assumption of feature independence. As a matter of fact, there is a correlation between them (Ln 24 & Out 24) of 0.44; however, this isn't problematic for two reasons:
1. 0.44 is quite low in regard to typical correlation standards.
2. Data must be seen in context.
#1.
Although I eliminated First Floor SqFt due, in part, to its high correlation, garage area's correlation is only 0.09 points lower. The main reason why First Floor SqFt is eliminated is its inclusion within the living area vector. Additionally, the main reason why I'm including garage area is that it is not included within the living area vector.
#2.
Similar to my #1 explanation, knowing that garage area is 'bonus space' and, as such, is NOT included in a home's advertised square feet indicates that it isn't within the Living Area data set in the same way FF SqFt or Baths would be. It will most likely scale with the living area while remaining independent of it, making it a good fit for an MLR.
### 3. Model Interpretation
Now that you have calculated all the individual models in the dataset, provide an analytical rationale for which model has performed best. To provide some additional assessment metrics, let's create a function to compute the R-Squared.
#### Mathematically:
$$R^2 = \frac {SS_{Regression}}{SS_{Total}} = 1 - \frac {SS_{Error}}{SS_{Total}}$$<br>
where:<br>
$SS_{Regression} = \sum (\widehat {y_i} - \bar {y_i})^2$<br>
$SS_{Total} = \sum ({y_i} - \bar {y_i})^2$<br>
$SS_{Error} = \sum ({y_i} - \widehat {y_i})^2$
```
#ssr = sum of squares of regression --> variance of the predictions from the mean
#sst = sum of squares total --> variance of the actuals from the mean
#sse = sum of squares error --> variance of the actuals from the predictions
def r_squared(y_outcome, predictions):
""" Compute the R Squared """
ssr = np.sum((predictions - np.mean(y_outcome))**2)
sst = np.sum((y_outcome - np.mean(y_outcome))**2)
sse = np.sum((y_outcome - predictions)**2)
# print(sse, "/", sst)
print("1 - SSE/SST =", round((1 - (sse/sst))*100), "%")
rss = (ssr/sst) * 100
return rss
```
Now that we have R-Squared calculated, evaluate the R-Squared for the test group across all models and determine which model explains the data best.
```
rss = r_squared(target, y_hat_1)
print("R-Squared =", round(rss), "%")
count += 1
```
### R-Squared Adjusted
$R^2-adjusted = 1 - \frac {(1-R^2)(n-1)}{n-k-1}$
```
def r_squared_adjusted(rss, sample_size, regressors):
    """Compute adjusted R-Squared; `rss` is the R-Squared as a proportion
    and `regressors` (k) is the number of predictors."""
    n = np.size(sample_size)
    k = regressors
    numerator = (1 - rss) * (n - 1)
    denominator = n - k - 1
    rssAdj = 1 - (numerator / denominator)
    return rssAdj
rssAdj = r_squared_adjusted(rss, y_hat_1, 2)
print(round(rssAdj), "%")
```
### 4. Model Diagnostics
Linear regression depends on meeting the assumptions of the model. While we have not yet talked about the assumptions, your goal is to research and develop an intuitive understanding of why the assumptions make sense. We will walk through this portion in the Multiple Linear Regression project.
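As a starting point for that research, here is a small sketch of one common diagnostic: plotting residuals against fitted values to eyeball linearity and constant variance (using the `y_hat_1` predictions computed above; this is an illustration, not part of the original project):
```
# Residuals vs. fitted values for the garage_area * living_area model above.
# A roughly constant spread around zero supports the linearity and
# constant-variance assumptions; curvature or a funnel shape argues against them.
residuals = target - y_hat_1
plt.figure(figsize=(8, 6))
plt.scatter(y_hat_1, residuals, alpha=0.5)
plt.axhline(0, color='red')
plt.xlabel('Fitted values')
plt.ylabel('Residuals')
plt.show()
```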
[SCEC BP3-QD](https://strike.scec.org/cvws/seas/download/SEAS_BP3.pdf) document is here.
# [DRAFT] Quasidynamic thrust fault earthquake cycles (plane strain)
## Summary
* Most of the code here follows almost exactly from [the previous section on strike-slip/antiplane earthquake cycles](c1qbx/part6_qd).
* Since the fault motion is in the same plane as the fault normal vectors, we are no longer operating in an antiplane approximation. Instead, we use plane strain elasticity, a different 2D reduction of full 3D elasticity.
* One key difference is the vector nature of the displacement and the tensor nature of the stress. We must always make sure we are dealing with tractions on the correct surface (see the short sketch after this list).
* We construct a mesh, build our discrete boundary integral operators, step through time and then compare against other benchmark participants' results.
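As a quick illustration of that traction point: the traction vector on a surface with unit normal $n$ is the stress tensor contracted with that normal, $t_i = \sigma_{ij} n_j$. A tiny standalone sketch with made-up stress values (unrelated to the solver code below):
```
import numpy as np

# Plane-strain stress tensor at one point: [[sxx, sxy], [sxy, syy]].
sigma = np.array([[2.0e6, 0.5e6],
                  [0.5e6, -1.0e6]])
# Unit normal of the surface of interest (here, a plane dipping 30 degrees).
n = np.array([np.sin(np.pi / 6), np.cos(np.pi / 6)])
traction = sigma @ n                                   # t_i = sigma_ij * n_j
normal_traction = traction @ n                         # component normal to the surface
shear_traction = traction @ np.array([-n[1], n[0]])    # component along the surface
print(traction, normal_traction, shear_traction)
```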
Does this section need detailed explanation or is it best left as lonely code? Most of the explanation would be redundant with the antiplane QD document.
```
from tectosaur2.nb_config import setup
setup()
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
from tectosaur2 import gauss_rule, refine_surfaces, integrate_term, panelize_symbolic_surface
from tectosaur2.elastic2d import elastic_t, elastic_h
from tectosaur2.rate_state import MaterialProps, qd_equation, solve_friction, aging_law
surf_half_L = 1000000
fault_length = 40000
max_panel_length = 400
n_fault = 400
mu = shear_modulus = 3.2e10
nu = 0.25
quad_rule = gauss_rule(6)
sp_t = sp.var("t")
angle_rad = sp.pi / 6
sp_x = (sp_t + 1) / 2 * sp.cos(angle_rad) * fault_length
sp_y = -(sp_t + 1) / 2 * sp.sin(angle_rad) * fault_length
fault = panelize_symbolic_surface(
sp_t, sp_x, sp_y,
quad_rule,
n_panels=n_fault
)
free = refine_surfaces(
[
(sp_t, -sp_t * surf_half_L, 0 * sp_t) # free surface
],
quad_rule,
control_points = [
        # nearfield surface panels and fault panels are limited to
        # max_panel_length (400 m here); with 6 quadrature nodes per panel
        # that is roughly 65 m per solution node
(0, 0, 1.5 * fault_length, max_panel_length),
(0, 0, 0.2 * fault_length, 1.5 * fault_length / (n_fault)),
        # farfield panels will be limited to 50,000 m per panel at most
(0, 0, surf_half_L, 50000),
]
)
print(
f"The free surface mesh has {free.n_panels} panels with a total of {free.n_pts} points."
)
print(
f"The fault mesh has {fault.n_panels} panels with a total of {fault.n_pts} points."
)
plt.plot(free.pts[:,0]/1000, free.pts[:,1]/1000, 'k-o')
plt.plot(fault.pts[:,0]/1000, fault.pts[:,1]/1000, 'r-o')
plt.xlabel(r'$x ~ \mathrm{(km)}$')
plt.ylabel(r'$y ~ \mathrm{(km)}$')
plt.axis('scaled')
plt.xlim([-100, 100])
plt.ylim([-80, 20])
plt.show()
```
And, to start off the integration, we'll construct the operators necessary for solving for free surface displacement from fault slip.
```
singularities = np.array(
[
[-surf_half_L, 0],
[surf_half_L, 0],
[0, 0],
[float(sp_x.subs(sp_t,1)), float(sp_y.subs(sp_t,1))],
]
)
(free_disp_to_free_disp, fault_slip_to_free_disp), report = integrate_term(
elastic_t(nu), free.pts, free, fault, singularities=singularities, safety_mode=True, return_report=True
)
fault_slip_to_free_disp = fault_slip_to_free_disp.reshape((-1, 2 * fault.n_pts))
free_disp_to_free_disp = free_disp_to_free_disp.reshape((-1, 2 * free.n_pts))
free_disp_solve_mat = (
np.eye(free_disp_to_free_disp.shape[0]) + free_disp_to_free_disp
)
from tectosaur2.elastic2d import ElasticH
(free_disp_to_fault_stress, fault_slip_to_fault_stress), report = integrate_term(
ElasticH(nu, d_cutoff=8.0),
# elastic_h(nu),
fault.pts,
free,
fault,
tol=1e-12,
safety_mode=True,
singularities=singularities,
return_report=True,
)
fault_slip_to_fault_stress *= shear_modulus
free_disp_to_fault_stress *= shear_modulus
```
**We're not achieving the tolerance we asked for!!**
Hypersingular integrals can be tricky but I think this is solvable.
```
report['integration_error'].max()
A = -fault_slip_to_fault_stress.reshape((-1, 2 * fault.n_pts))
B = -free_disp_to_fault_stress.reshape((-1, 2 * free.n_pts))
C = fault_slip_to_free_disp
Dinv = np.linalg.inv(free_disp_solve_mat)
total_fault_slip_to_fault_stress = A - B.dot(Dinv.dot(C))
nx = fault.normals[:, 0]
ny = fault.normals[:, 1]
normal_mult = np.transpose(np.array([[nx, 0 * nx, ny], [0 * nx, ny, nx]]), (2, 0, 1))
total_fault_slip_to_fault_traction = np.sum(
total_fault_slip_to_fault_stress.reshape((-1, 3, fault.n_pts, 2))[:, None, :, :, :]
* normal_mult[:, :, :, None, None],
axis=2,
).reshape((-1, 2 * fault.n_pts))
```
## Rate and state friction
```
siay = 31556952 # seconds in a year
density = 2670 # rock density (kg/m^3)
cs = np.sqrt(shear_modulus / density) # Shear wave speed (m/s)
Vp = 1e-9 # Rate of plate motion
sigma_n0 = 50e6 # Normal stress (Pa)
# parameters describing "a", the coefficient of the direct velocity strengthening effect
a0 = 0.01
amax = 0.025
H = 15000
h = 3000
fx = fault.pts[:, 0]
fy = fault.pts[:, 1]
fd = -np.sqrt(fx ** 2 + fy ** 2)
a = np.where(
fd > -H, a0, np.where(fd > -(H + h), a0 + (amax - a0) * (fd + H) / -h, amax)
)
mp = MaterialProps(a=a, b=0.015, Dc=0.008, f0=0.6, V0=1e-6, eta=shear_modulus / (2 * cs))
plt.figure(figsize=(3, 5))
plt.plot(mp.a, fd/1000, label='a')
plt.plot(np.full(fy.shape[0], mp.b), fd/1000, label='b')
plt.xlim([0, 0.03])
plt.ylabel('depth')
plt.legend()
plt.show()
mesh_L = np.max(np.abs(np.diff(fd)))
Lb = shear_modulus * mp.Dc / (sigma_n0 * mp.b)
hstar = (np.pi * shear_modulus * mp.Dc) / (sigma_n0 * (mp.b - mp.a))
mesh_L, Lb, np.min(hstar[hstar > 0])
```
## Quasidynamic earthquake cycle derivatives
```
from scipy.optimize import fsolve
import copy
init_state_scalar = fsolve(lambda S: aging_law(mp, Vp, S), 0.7)[0]
mp_amax = copy.copy(mp)
mp_amax.a=amax
tau_amax = -qd_equation(mp_amax, sigma_n0, 0, Vp, init_state_scalar)
init_state = np.log((2*mp.V0/Vp)*np.sinh((tau_amax - mp.eta*Vp) / (mp.a*sigma_n0))) * mp.a
init_tau = np.full(fault.n_pts, tau_amax)
init_sigma = np.full(fault.n_pts, sigma_n0)
init_slip_deficit = np.zeros(fault.n_pts)
init_conditions = np.concatenate((init_slip_deficit, init_state))
class SystemState:
V_old = np.full(fault.n_pts, Vp)
state = None
def calc(self, t, y, verbose=False):
# Separate the slip_deficit and state sub components of the
# time integration state.
slip_deficit = y[: init_slip_deficit.shape[0]]
state = y[init_slip_deficit.shape[0] :]
# If the state values are bad, then the adaptive integrator probably
# took a bad step.
if np.any((state < 0) | (state > 2.0)):
print("bad state")
return False
# The big three lines solving for quasistatic shear stress, slip rate
# and state evolution
sd_vector = np.stack((slip_deficit * -ny, slip_deficit * nx), axis=1).ravel()
traction = total_fault_slip_to_fault_traction.dot(sd_vector).reshape((-1, 2))
delta_sigma_qs = np.sum(traction * np.stack((nx, ny), axis=1), axis=1)
delta_tau_qs = -np.sum(traction * np.stack((-ny, nx), axis=1), axis=1)
tau_qs = init_tau + delta_tau_qs
sigma_qs = init_sigma + delta_sigma_qs
V = solve_friction(mp, sigma_qs, tau_qs, self.V_old, state)
if not V[2]:
print("convergence failed")
return False
V=V[0]
if not np.all(np.isfinite(V)):
print("infinite V")
return False
dstatedt = aging_law(mp, V, state)
self.V_old = V
slip_deficit_rate = Vp - V
out = (
slip_deficit,
state,
delta_sigma_qs,
sigma_qs,
delta_tau_qs,
tau_qs,
V,
slip_deficit_rate,
dstatedt,
)
self.data = out
return self.data
def plot_system_state(t, SS, xlim=None):
"""This is just a helper function that creates some rough plots of the
current state to help with debugging"""
(
slip_deficit,
state,
delta_sigma_qs,
sigma_qs,
delta_tau_qs,
tau_qs,
V,
slip_deficit_rate,
dstatedt,
) = SS
slip = Vp * t - slip_deficit
fd = -np.linalg.norm(fault.pts, axis=1)
plt.figure(figsize=(15, 9))
plt.suptitle(f"t={t/siay}")
plt.subplot(3, 3, 1)
plt.title("slip")
plt.plot(fd, slip)
plt.xlim(xlim)
plt.subplot(3, 3, 2)
plt.title("slip deficit")
plt.plot(fd, slip_deficit)
plt.xlim(xlim)
# plt.subplot(3, 3, 2)
# plt.title("slip deficit rate")
# plt.plot(fd, slip_deficit_rate)
# plt.xlim(xlim)
# plt.subplot(3, 3, 2)
# plt.title("strength")
# plt.plot(fd, tau_qs/sigma_qs)
# plt.xlim(xlim)
plt.subplot(3, 3, 3)
# plt.title("log V")
# plt.plot(fd, np.log10(V))
plt.title("V")
plt.plot(fd, V)
plt.xlim(xlim)
plt.subplot(3, 3, 4)
plt.title(r"$\sigma_{qs}$")
plt.plot(fd, sigma_qs)
plt.xlim(xlim)
plt.subplot(3, 3, 5)
plt.title(r"$\tau_{qs}$")
plt.plot(fd, tau_qs, 'k-o')
plt.xlim(xlim)
plt.subplot(3, 3, 6)
plt.title("state")
plt.plot(fd, state)
plt.xlim(xlim)
plt.subplot(3, 3, 7)
plt.title(r"$\Delta\sigma_{qs}$")
plt.plot(fd, delta_sigma_qs)
plt.hlines([0], [fd[-1]], [fd[0]])
plt.xlim(xlim)
plt.subplot(3, 3, 8)
plt.title(r"$\Delta\tau_{qs}$")
plt.plot(fd, delta_tau_qs)
plt.hlines([0], [fd[-1]], [fd[0]])
plt.xlim(xlim)
plt.subplot(3, 3, 9)
plt.title("dstatedt")
plt.plot(fd, dstatedt)
plt.xlim(xlim)
plt.tight_layout()
plt.show()
def calc_derivatives(state, t, y):
"""
This helper function calculates the system state and then extracts the
relevant derivatives that the integrator needs. It also intentionally
returns infinite derivatives when the `y` vector provided by the integrator
is invalid.
"""
if not np.all(np.isfinite(y)):
return np.inf * y
state_vecs = state.calc(t, y)
if not state_vecs:
return np.inf * y
derivatives = np.concatenate((state_vecs[-2], state_vecs[-1]))
return derivatives
```
## Integrating through time
```
%%time
from scipy.integrate import RK23, RK45
# We use a 5th order adaptive Runge-Kutta method and pass the derivative function to it.
# The relative tolerance is set to 1e-11 so that even the rapid changes in slip rate
# during rupture are resolved accurately.
state = SystemState()
derivs = lambda t, y: calc_derivatives(state, t, y)
integrator = RK45
atol = Vp * 1e-6
rtol = 1e-11
rk = integrator(derivs, 0, init_conditions, 1e50, atol=atol, rtol=rtol)
# Set the initial time step to one day.
rk.h_abs = 60 * 60 * 24
# Integrate for 1000 years.
max_T = 1000 * siay
n_steps = 500000
t_history = [0]
y_history = [init_conditions.copy()]
for i in range(n_steps):
# Take a time step and store the result
    if rk.step() is not None:
raise Exception("TIME STEPPING FAILED")
t_history.append(rk.t)
y_history.append(rk.y.copy())
# Print the time every 5000 steps
if i % 5000 == 0:
print(f"step={i}, time={rk.t / siay} yrs, step={(rk.t - t_history[-2]) / siay}")
if rk.t > max_T:
break
y_history = np.array(y_history)
t_history = np.array(t_history)
```
## Plotting the results
Now that we've solved for 1000 years of fault slip evolution, let's plot some of the results. I'll start with a super simple plot of the maximum log slip rate over time.
```
derivs_history = np.diff(y_history, axis=0) / np.diff(t_history)[:, None]
max_vel = np.max(np.abs(derivs_history), axis=1)
plt.plot(t_history[1:] / siay, np.log10(max_vel))
plt.xlabel('$t ~~ \mathrm{(yrs)}$')
plt.ylabel('$\log_{10}(V)$')
plt.show()
```
And next, we'll make the classic plot showing the spatial distribution of slip over time:
- the blue lines show interseismic slip evolution and are plotted every fifteen years
- the red lines show evolution during rupture every three seconds.
```
plt.figure(figsize=(10, 4))
last_plt_t = -1000
last_plt_slip = init_slip_deficit
event_times = []
for i in range(len(y_history) - 1):
y = y_history[i]
t = t_history[i]
slip_deficit = y[: init_slip_deficit.shape[0]]
should_plot = False
    # Plot a red line every three seconds if the slip rate is over 0.1 mm/s.
if (
max_vel[i] >= 0.0001 and t - last_plt_t > 3
):
if len(event_times) == 0 or t - event_times[-1] > siay:
event_times.append(t)
should_plot = True
color = "r"
# Plot a blue line every fifteen years during the interseismic period
if t - last_plt_t > 15 * siay:
should_plot = True
color = "b"
if should_plot:
# Convert from slip deficit to slip:
slip = -slip_deficit + Vp * t
plt.plot(slip, fd / 1000.0, color + "-", linewidth=0.5)
last_plt_t = t
last_plt_slip = slip
plt.xlim([0, np.max(last_plt_slip)])
plt.ylim([-40, 0])
plt.ylabel(r"$\textrm{z (km)}$")
plt.xlabel(r"$\textrm{slip (m)}$")
plt.tight_layout()
plt.savefig("halfspace.png", dpi=300)
plt.show()
```
And a plot of recurrence interval:
```
plt.title("Recurrence interval")
plt.plot(np.diff(event_times) / siay, "k-*")
plt.xticks(np.arange(0, 10, 1))
plt.yticks(np.arange(75, 80, 0.5))
plt.xlabel("Event number")
plt.ylabel("Time between events (yr)")
plt.show()
```
## Comparison against SCEC SEAS results
```
ozawa_data = np.loadtxt("ozawa7500.txt")
ozawa_slip_rate = 10 ** ozawa_data[:, 2]
ozawa_stress = ozawa_data[:, 3]
t_start_idx = np.argmax(max_vel > 1e-4)
t_end_idx = np.argmax(max_vel[t_start_idx:] < 1e-6)
n_steps = t_end_idx - t_start_idx
t_chunk = t_history[t_start_idx : t_end_idx]
shear_chunk = []
slip_rate_chunk = []
for i in range(n_steps):
system_state = SystemState().calc(t_history[t_start_idx + i], y_history[t_start_idx + i])
slip_deficit, state, delta_sigma_qs, sigma_qs, delta_tau_qs, tau_qs, V, slip_deficit_rate, dstatedt = system_state
shear_chunk.append((tau_qs - mp.eta * V))
slip_rate_chunk.append(V)
shear_chunk = np.array(shear_chunk)
slip_rate_chunk = np.array(slip_rate_chunk)
fault_idx = np.argmax((-7450 > fd) & (fd > -7550))
VAvg = np.mean(slip_rate_chunk[:, fault_idx:(fault_idx+2)], axis=1)
SAvg = np.mean(shear_chunk[:, fault_idx:(fault_idx+2)], axis=1)
fault_idx
t_align = t_chunk[np.argmax(VAvg > 0.2)]
ozawa_t_align = np.argmax(ozawa_slip_rate > 0.2)
for lims in [(-1, 1), (-15, 30)]:
plt.figure(figsize=(12, 8))
plt.subplot(2, 1, 1)
plt.plot(t_chunk - t_align, SAvg / 1e6, "k-o", markersize=0.5, linewidth=0.5, label='here')
plt.plot(
ozawa_data[:, 0] - ozawa_data[ozawa_t_align, 0],
ozawa_stress,
"b-*",
markersize=0.5,
linewidth=0.5,
label='ozawa'
)
plt.legend()
plt.xlim(lims)
plt.xlabel("Time (s)")
plt.ylabel("Shear Stress (MPa)")
# plt.show()
plt.subplot(2, 1, 2)
plt.plot(t_chunk - t_align, VAvg, "k-o", markersize=0.5, linewidth=0.5, label='here')
plt.plot(
ozawa_data[:, 0] - ozawa_data[ozawa_t_align, 0],
ozawa_slip_rate[:],
"b-*",
markersize=0.5,
linewidth=0.5,
label='ozawa'
)
plt.legend()
plt.xlim(lims)
plt.xlabel("Time (s)")
plt.ylabel("Slip rate (m/s)")
plt.tight_layout()
plt.show()
```
# Machine Learning Textbook (3rd Edition)
# Chapter 9 - Embedding a Machine Learning Model into a Web Application
**You can view this notebook in the Jupyter Notebook Viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.**
<table class="tfo-notebook-buttons" align="left">
<td>
    <a target="_blank" href="https://nbviewer.org/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch09/ch09.ipynb"><img src="https://jupyter.org/assets/share.png" width="60" />View in Jupyter Notebook Viewer</a>
</td>
<td>
    <a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch09/ch09.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
### Table of Contents
- Chapter 8 recap - Training a model for movie review classification
- Serializing fitted scikit-learn estimators
- Setting up an SQLite database for data storage
- Developing a Flask web application
- Our first Flask application
- Form validation and rendering
- Turning the movie review classifier into a web application
- Deploying the web application to a public server
- Updating the movie classifier
- Summary
```
# If running in Colab, install the latest version of scikit-learn.
!pip install --upgrade scikit-learn
from IPython.display import Image
```
The Flask web application code can be found in the following directories:
- `1st_flask_app_1/`: a simple Flask web application
- `1st_flask_app_2/`: an extension of `1st_flask_app_1` with form validation and rendering
- `movieclassifier/`: the movie review classifier embedded in a web application
- `movieclassifier_with_update/`: the same as `movieclassifier`, but it uses an SQLite database for initialization
To run the web applications locally, `cd` into the respective directory (as listed above) and execute the main application script, for example:
    cd ./1st_flask_app_1
    python app.py
You should then see output like the following in the terminal:
    * Running on http://127.0.0.1:5000/
    * Restarting with reloader
Open a web browser and enter the address shown in the terminal (typically http://127.0.0.1:5000/) to access the web application.
**A live demo of the example application built in this tutorial can be found at: http://haesun.pythonanywhere.com/**.
<br>
<br>
# Chapter 8 Recap - Training a Model for Movie Review Classification
This section reuses the logistic regression model trained in the final section of Chapter 8. Run the following code blocks to train the model that we will use in the next sections.
**Note**
The following code uses the `movie_data.csv` dataset created in Chapter 8.
**If you are using Colab, run the following cell.**
```
!wget https://github.com/rickiepark/python-machine-learning-book-3rd-edition/raw/master/ch09/movie_data.csv.gz
import gzip
with gzip.open('movie_data.csv.gz') as f_in, open('movie_data.csv', 'wb') as f_out:
f_out.writelines(f_in)
import nltk
nltk.download('stopwords')
import numpy as np
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
stop = stopwords.words('english')
porter = PorterStemmer()
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
```
`pyprind` is a utility for displaying progress bars in Jupyter notebooks. Run the following cell to install the `pyprind` package.
```
!pip install pyprind
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
```
### Note
Creating the pickle files can be a bit tricky, so I have added simple test scripts in the `pickle-test-scripts/` directory that check whether your environment is set up correctly. They essentially contain a subset of the `movie_data` data and a trimmed-down version of the related code from `ch08`.
Running
    python pickle-dump-test.py
trains a small classification model on `movie_data_small.csv` and creates two pickle files:
    stopwords.pkl
    classifier.pkl
Then, running
    python pickle-load-test.py
should print the following two lines:
    Prediction: positive
    Probability: 85.71%
<br>
<br>
# Serializing Fitted scikit-learn Estimators
After training the logistic regression model above, we save the classifier, the stop words, the Porter stemmer, and the `HashingVectorizer` as serialized objects on the local disk so that the web application can use the fitted classifier later.
```
import pickle
import os
dest = os.path.join('movieclassifier', 'pkl_objects')
if not os.path.exists(dest):
os.makedirs(dest)
pickle.dump(stop, open(os.path.join(dest, 'stopwords.pkl'), 'wb'), protocol=4)
pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'), protocol=4)
```
Next, we save the `HashingVectorizer` in a separate file so that it can be imported later.
```
%%writefile movieclassifier/vectorizer.py
from sklearn.feature_extraction.text import HashingVectorizer
import re
import os
import pickle
cur_dir = os.path.dirname(__file__)
stop = pickle.load(open(
os.path.join(cur_dir,
'pkl_objects',
'stopwords.pkl'), 'rb'))
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text.lower())
text = re.sub('[\W]+', ' ', text.lower()) \
+ ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
```
After executing the preceding code cells, you can restart the IPython notebook kernel to check that the objects were saved correctly.
First, change the current Python working directory to `movieclassifier`:
```
import os
os.chdir('movieclassifier')
import pickle
import re
import os
from vectorizer import vect
clf = pickle.load(open(os.path.join('pkl_objects', 'classifier.pkl'), 'rb'))
import numpy as np
label = {0:'음성', 1:'양성'}
example = ["I love this movie. It's amazing."]
X = vect.transform(example)
print('예측: %s\n확률: %.2f%%' %\
(label[clf.predict(X)[0]],
np.max(clf.predict_proba(X))*100))
```
<br>
<br>
# Setting up an SQLite database for data storage
Before running this code, make sure the current working directory is the `movieclassifier` directory.
```
os.getcwd()
import sqlite3
import os
conn = sqlite3.connect('reviews.sqlite')
c = conn.cursor()
c.execute('DROP TABLE IF EXISTS review_db')
c.execute('CREATE TABLE review_db (review TEXT, sentiment INTEGER, date TEXT)')
example1 = 'I love this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example1, 1))
example2 = 'I disliked this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example2, 0))
conn.commit()
conn.close()
conn = sqlite3.connect('reviews.sqlite')
c = conn.cursor()
c.execute("SELECT * FROM review_db WHERE date BETWEEN '2017-01-01 10:10:10' AND DATETIME('now')")
results = c.fetchall()
conn.close()
print(results)
Image(url='https://git.io/Jts3V', width=700)
```
<br>
# Developing a web application with Flask
...
## Our first Flask application
...
```
Image(url='https://git.io/Jts3o', width=700)
```
## Form validation and rendering
```
Image(url='https://git.io/Jts3K', width=400)
Image(url='https://git.io/Jts36', width=400)
```
<br>
<br>
## Screen summary
```
Image(url='https://git.io/Jts3P', width=800)
Image(url='https://git.io/Jts3X', width=800)
Image(url='https://git.io/Jts31', width=400)
```
# Turning the movie review classifier into a web application
```
Image(url='https://git.io/Jts3M', width=400)
Image(url='https://git.io/Jts3D', width=400)
Image(url='https://git.io/Jts3y', width=400)
Image(url='https://git.io/Jts3S', width=200)
Image(url='https://git.io/Jts32', width=400)
```
<br>
<br>
# Deploying the web application to a public server
```
Image(url='https://git.io/Jts39', width=600)
```
<br>
<br>
## Updating the movie classifier
Use the movieclassifier_with_update directory included in the downloaded GitHub repository (otherwise, make a copy of the `movieclassifier` directory and use that).
**If you are using Colab, run the following cell.**
```
!cp -r ../movieclassifier ../movieclassifier_with_update
import shutil
os.chdir('..')
if not os.path.exists('movieclassifier_with_update'):
os.mkdir('movieclassifier_with_update')
os.chdir('movieclassifier_with_update')
if not os.path.exists('pkl_objects'):
os.mkdir('pkl_objects')
shutil.copyfile('../movieclassifier/pkl_objects/classifier.pkl',
'./pkl_objects/classifier.pkl')
shutil.copyfile('../movieclassifier/reviews.sqlite',
'./reviews.sqlite')
```
Define a function that updates the classifier with the data stored in the SQLite database:
```
import pickle
import sqlite3
import numpy as np
# Import the HashingVectorizer from the local directory
from vectorizer import vect
def update_model(db_path, model, batch_size=10000):
conn = sqlite3.connect(db_path)
c = conn.cursor()
c.execute('SELECT * from review_db')
results = c.fetchmany(batch_size)
while results:
data = np.array(results)
X = data[:, 0]
y = data[:, 1].astype(int)
classes = np.array([0, 1])
X_train = vect.transform(X)
model.partial_fit(X_train, y, classes=classes)
results = c.fetchmany(batch_size)
conn.close()
return None
```
Update the model:
```
cur_dir = '.'
# If you inserted this code into the app.py file, use the following path instead.
# import os
# cur_dir = os.path.dirname(__file__)
clf = pickle.load(open(os.path.join(cur_dir,
'pkl_objects',
'classifier.pkl'), 'rb'))
db = os.path.join(cur_dir, 'reviews.sqlite')
update_model(db_path=db, model=clf, batch_size=10000)
# To update the classifier.pkl file, uncomment the following lines.
# pickle.dump(clf, open(os.path.join(cur_dir,
# 'pkl_objects', 'classifier.pkl'), 'wb')
# , protocol=4)
```
<br>
<br>
|
github_jupyter
|
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('Data/20200403-WHO.csv')
df
df = df[df['Country/Territory'] != 'conveyance (Diamond']
death_rate = df['Total Deaths']/df['Total Confirmed']*100
df['Death Rate'] = death_rate
df
countries_infected = len(df)
print('The total number of countries infected is:',countries_infected)
df = df.sort_values(by=['Death Rate'],ascending=False)
df[0:30]
minimum_number_cases = 1000 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfMinNumCases = df[df['Total Confirmed'] > minimum_number_cases]
dfMinNumCases = dfMinNumCases.reset_index(drop=True)
dfMinNumCases.index = np.arange(1, (len(dfMinNumCases)+1))
dfMinNumCases[0:30]
#matplotlib defaults
sns.set(style="whitegrid")
top15_deathrate = dfMinNumCases[0:15]
death_rate = top15_deathrate.round({'Death Rate':2})
death_rate = death_rate['Death Rate']
plt.figure(figsize=(15,10))
plt.barh(top15_deathrate['Country/Territory'],top15_deathrate['Death Rate'],height=0.7, color='red')
plt.title('Death Rate per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death Rate [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
plt.gca().invert_yaxis()
for i in range (0,15):
plt.text(x=death_rate.iloc[i]+0.4, y=i , s=death_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
plt.show()
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Death Rate',y='Country/Territory', data=top15_deathrate ,
label="Deaths", color="red")
plt.title('Death Rate per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death Rate [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=death_rate.iloc[i]+0.4, y=i , s=death_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DeathRatePerCountry.png', bbox_inches='tight')
plt.show()
#matplotlib defaults
top15_confirmed = top15_deathrate.sort_values(by=['Total Confirmed'],ascending=False)
countries = np.array(top15_confirmed['Country/Territory'])
confirmed = np.array(top15_confirmed['Total Confirmed'])
deaths = np.array(top15_confirmed['Total Deaths'])
difference = confirmed - deaths
plt.figure(figsize=(15,10))
p1 = plt.barh(countries,deaths, color='red')
p2 = plt.barh(countries,difference,left=deaths, color='yellow')
plt.title('Total Number of Cases/Deaths (03/04/2020)',fontsize=25)
plt.xlabel('Cases/Deaths',fontsize=18)
plt.ylabel('Country',fontsize=18)
plt.legend((p1[0], p2[0]), ('Deaths', 'Confirmed'), loc='lower right')
plt.gca().invert_yaxis()
for i in range (0,15):
plt.text(x=deaths[i]+1900, y=i , s=deaths[i],horizontalalignment='center',verticalalignment='center', color='red',fontsize=14)
plt.text(x=confirmed[i]+4000, y=i , s=confirmed[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
plt.show()
#seaborn defaults
sns.set(style="whitegrid")
f, ax = plt.subplots(figsize=(15, 6))
sns.set_color_codes("pastel")
sns.barplot(x='Total Confirmed',y='Country/Territory', data=top15_confirmed,
label="Confirmed", color="yellow")
sns.set_color_codes("muted")
sns.barplot(x='Total Deaths',y='Country/Territory', data=top15_confirmed ,
label="Deaths", color="red")
plt.title('Total Number of Cases/Deaths in the Top15 Death Rate Countries (03/04/2020)',fontsize=18)
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(ylabel="Countries/Territory",
xlabel="Cases/Deaths")
for i in range (0,15):
plt.text(x=deaths[i]+1900, y=i , s=deaths[i],horizontalalignment='center',verticalalignment='center', color='red',fontsize=14)
plt.text(x=confirmed[i]+4000, y=i , s=confirmed[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
sns.despine(left=True, bottom=True)
plt.savefig('Graphs/20200403_TotalNumberCasesDeaths.png', bbox_inches='tight')
dfDSLRC = df.sort_values(by=['Days since last reported case'],ascending=False)#dfDSLRC = dataframe Days since last reported case
dfDSLRC[0:30]
#seaborn defaults
top15DSLRC = dfDSLRC[0:15].sort_values(by=['Days since last reported case'])
DSLRC = top15DSLRC['Days since last reported case']
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Country/Territory',y='Days since last reported case', data=top15DSLRC ,
label="Days since last reported case", color="blue")
plt.title('Days since Last Reported Case per Country (03/04/2020)',fontsize=24)
plt.ylabel('Days since last reported case',fontsize=18)
plt.xlabel('Countries/Territory',fontsize=18)
plt.xticks(rotation='vertical')
for i in range (0,15):
plt.text(x=i, y=DSLRC.iloc[i]+0.4 , s=DSLRC.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DaysSinceLast.png', bbox_inches='tight')
plt.show()
#seaborn defaults
confirmedDSLRC = np.array(top15DSLRC['Total Confirmed'])
deathsDSLRC = np.array(top15DSLRC['Total Deaths'])
sns.set(style="whitegrid")
f, ax = plt.subplots(figsize=(15, 6))
sns.set_color_codes("pastel")
sns.barplot(x='Total Confirmed',y='Country/Territory', data=top15DSLRC,
label="Confirmed", color="yellow")
sns.set_color_codes("muted")
sns.barplot(x='Total Deaths',y='Country/Territory', data=top15DSLRC ,
label="Deaths", color="red")
plt.title('Total Number of Cases/Deaths in the Top15 Days Since Last Reported Case Countries (03/04/2020)',fontsize=18)
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(ylabel="Countries/Territory",
xlabel="Cases/Deaths")
for i in range (0,15):
plt.text(x=deathsDSLRC[i]+0.2, y=i , s=deathsDSLRC[i],horizontalalignment='center',verticalalignment='center', color='red',fontsize=14)
plt.text(x=confirmedDSLRC[i]+0.4, y=i , s=confirmedDSLRC[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
sns.despine(left=True, bottom=True)
plt.savefig('Graphs/20200403_TotalNumberCasesDeathsDSLRC.png', bbox_inches='tight')
Transmission_type = pd.get_dummies(df, columns=['Transmission Classification'])
Transmission_type
print('The number of countries with only imported cases is:',Transmission_type['Transmission Classification_Imported cases only'].sum())
print('The number of countries with local transmissions cases is:',Transmission_type['Transmission Classification_Local transmission'].sum())
print('The number of countries under investigation to determine the type of transmission is:',Transmission_type['Transmission Classification_Under investigation'].sum())
WorldPopulation = pd.read_csv('Data/WorldPopulation.csv')
df['Population'] = 0
for i in range (0,len(df)):
pop = WorldPopulation.loc[WorldPopulation.loc[:,'Country/Territory']==df.loc[i,'Country/Territory']]
if pop.empty == True:
df.loc[i,'Population'] = 0
else:
df.loc[i,'Population'] = pop.iloc[0,1]
for i in range (0,len(df)):
if df.loc[i,'Population'] != 0:
df.loc[i,'Population Contaminated %'] = df.loc[i,'Total Confirmed']/df.loc[i,'Population']*100
else:
df.loc[i,'Population Contaminated %'] = 0
dfPopContaminated = df.sort_values(by=['Population Contaminated %'],ascending=False)
minimum_number_cases = 1 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopContaminated[dfPopContaminated['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_contaminated = dfPopMinNumCases[0:15]
contamination_rate = top15_contaminated.round({'Population Contaminated %':4})
contamination_rate = contamination_rate['Population Contaminated %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Contaminated %',y='Country/Territory', data=top15_contaminated ,
label="Deaths", color="navy")
plt.title('Cases Confirmed per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Cases Confirmed per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=contamination_rate.iloc[i]+0.03, y=i , s=contamination_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_ContaminationPerCountry.png', bbox_inches='tight')
plt.show()
minimum_number_cases = 1000 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopContaminated[dfPopContaminated['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_contaminated = dfPopMinNumCases[0:15]
contamination_rate = top15_contaminated.round({'Population Contaminated %':4})
contamination_rate = contamination_rate['Population Contaminated %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Contaminated %',y='Country/Territory', data=top15_contaminated ,
label="Deaths", color="navy")
plt.title('Cases Confirmed per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Cases Confirmed per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=contamination_rate.iloc[i]+0.03, y=i , s=contamination_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_ContaminationPerCountry1kCases.png', bbox_inches='tight')
plt.show()
for i in range (0,len(df)):
if df.loc[i,'Population'] != 0:
df.loc[i,'Population Death Rate %'] = df.loc[i,'Total Deaths']/df.loc[i,'Population']*100
else:
df.loc[i,'Population Death Rate %'] = 0
dfPopDeathRate = df.sort_values(by=['Population Death Rate %'],ascending=False)
minimum_number_cases = 1 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopDeathRate[dfPopDeathRate['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_PopDeathRate = dfPopMinNumCases[0:15]
popDeath_rate = top15_PopDeathRate.round({'Population Death Rate %':4})
popDeath_rate = popDeath_rate['Population Death Rate %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Death Rate %',y='Country/Territory', data=top15_PopDeathRate ,
label="Deaths", color="navy")
plt.title('Death rate per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death rate per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=popDeath_rate.iloc[i]+0.003, y=i , s=popDeath_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DeathRateinPopPerCountryCases.png', bbox_inches='tight')
plt.show()
minimum_number_cases = 1000 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopDeathRate[dfPopDeathRate['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_PopDeathRate = dfPopMinNumCases[0:15]
popDeath_rate = top15_PopDeathRate.round({'Population Death Rate %':4})
popDeath_rate = popDeath_rate['Population Death Rate %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Death Rate %',y='Country/Territory', data=top15_PopDeathRate ,
label="Deaths", color="navy")
plt.title('Death rate per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death rate per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=popDeath_rate.iloc[i]+0.001, y=i , s=popDeath_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DeathRateinPopPerCountry1kCases.png', bbox_inches='tight')
plt.show()
#seaborn defaults
confirmedPop = np.array(top15_PopDeathRate['Total Confirmed'])
deathsPop = np.array(top15_PopDeathRate['Total Deaths'])
sns.set(style="whitegrid")
f, ax = plt.subplots(figsize=(15, 6))
sns.set_color_codes("pastel")
sns.barplot(x='Total Confirmed',y='Country/Territory', data=top15_PopDeathRate,
label="Confirmed", color="yellow")
sns.set_color_codes("muted")
sns.barplot(x='Total Deaths',y='Country/Territory', data=top15_PopDeathRate ,
label="Deaths", color="red")
plt.title('Total Number of Cases/Deaths in the Top15 Death Rate per Number of Habitants Countries (03/04/2020)',fontsize=18)
ax.legend(ncol=2, loc="upper right", frameon=True)
ax.set(ylabel="Countries/Territory",
xlabel="Cases/Deaths")
for i in range (0,15):
plt.text(x=deathsPop[i]+2500, y=i , s=deathsPop[i],horizontalalignment='center',verticalalignment='center', color='blue',fontsize=14)
plt.text(x=confirmedPop[i]+10000, y=i , s=confirmedPop[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
sns.despine(left=True, bottom=True)
plt.savefig('Graphs/20200403_TotalNumberCasesDeathsPop.png', bbox_inches='tight')
```
|
github_jupyter
|
# Gaussian Bayes classifier
In this assignment we will use a Gaussian Bayes classifier to classify our data points.
# Import packages
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from sklearn.metrics import classification_report
from matplotlib import cm
```
# Load training data
Our data has 2D features $x_1, x_2$. Data from the two classes are in $\texttt{class1_train}$ and $\texttt{class2_train}$ respectively. Each file has two columns corresponding to the 2D features.
```
class1_train = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/class1_train').to_numpy()
class2_train = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/class2_train').to_numpy()
```
# Visualize training data
Generate 2D scatter plot of the training data. Plot the points from class 1 in red and the points from class 2 in blue.
```
plt.figure(figsize=(10,10))
plt.scatter(class1_train[:,0], class1_train[:,1], color = 'red', label = 'Class 1')
plt.scatter(class2_train[:,0], class2_train[:,1], color = 'blue', label = 'Class 2')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc = 'best')
plt.show()
```
# Maximum likelihood estimate of parameters
We will model the likelihoods $P(\mathbf{x}|C_1)$ and $P(\mathbf{x}|C_2)$ as $\mathcal{N}(\mathbf{\mu_1},\Sigma_1)$ and $\mathcal{N}(\mathbf{\mu_2},\Sigma_2)$ respectively. The prior probabilities of the classes are $P(C_1)=\pi_1$ and $P(C_2)=\pi_2$.
The maximum likelihood estimates of the parameters are as follows:
\begin{align*}
\pi_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)}{N}\\
\mathbf{\mu_k} &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)\mathbf{x}^i}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\
\Sigma_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)(\mathbf{x}^i-\mathbf{\mu_k})(\mathbf{x}^i-\mathbf{\mu_k})^T}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\
\end{align*}
Here, $t^i$ is the target or class of the $i^{th}$ sample. $\mathbb{1}(t^i=k)$ is 1 if $t^i=k$ and 0 otherwise.
Compute the maximum likelihood estimates of $\pi_1$, $\mu_1$, $\Sigma_1$ and $\pi_2$, $\mu_2$, $\Sigma_2$.
Also print these values.
```
n1, n2 = class1_train.shape[0], class2_train.shape[0]
pi1, pi2 = n1/(n1+n2), n2/(n1+n2)
mu1 = np.mean(class1_train, axis = 0)
mu2 = np.mean(class2_train, axis = 0)
# ------------------ sigma -------------------- #
XT = (class1_train-mu1).reshape(n1,1,2)
X = (class1_train-mu1).reshape(n1,2,1)
sigma1 = np.matmul(X,XT).mean(axis = 0)
XT = (class2_train-mu2).reshape(n2,1,2)
X = (class2_train-mu2).reshape(n2,2,1)
sigma2 = np.matmul(X,XT).mean(axis = 0)
print(' pi1 = {}\n mu1 = {}\n sigma1 = \n{}\n'.format(pi1, mu1, sigma1))
print(' pi2 = {}\n mu2 = {}\n sigma2 = \n{}\n'.format(pi2, mu2, sigma2))
```
# Alternate approach
```
sigma1 = np.cov((class1_train-mu1).T, bias=True)
sigma2 = np.cov((class2_train-mu2).T, bias=True)
print(sigma1)
print(sigma2)
```
# Another alternative
```
XT = (class1_train-mu1).T
X = (class1_train-mu1)
sigma1 = np.matmul(XT,X)/n1
XT = (class2_train-mu2).T
X = (class2_train-mu2)
sigma2 = np.matmul(XT,X)/n2
print(sigma1)
print(sigma2)
```
# Visualize the likelihood
Now that you have the parameters, let us visualize what the likelihood looks like.
1. Use $\texttt{np.mgrid}$ to generate points uniformly spaced in -5 to 5 along 2 axes
1. Use $\texttt{multivariate_normal.pdf}$ to compute the Gaussian likelihood for each class
1. Use $\texttt{plot_surface}$ to plot the likelihood of each class.
1. Use $\texttt{contourf}$ to plot the likelihood of each class.
For the plots, use $\texttt{cmap=cm.Reds}$ for class 1 and $\texttt{cmap=cm.Blues}$ for class 2. Use $\texttt{alpha=0.5}$ to overlay both plots together.
```
x, y = np.mgrid[-5:5:.01, -5:5:.01]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = x; pos[:, :, 1] = y
rv1 = multivariate_normal(mean = mu1, cov = sigma1)
rv2 = multivariate_normal(mean = mu2, cov = sigma2)
# plt.plot(x,y,likelihood1.pdf(pos), coo = 'red')
likelihood1 = rv1.pdf(pos)
likelihood2 = rv2.pdf(pos)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(121, projection='3d')
plt.title('Likelihood')
ax.plot_surface(x,y,likelihood1, cmap=cm.Reds, alpha = 0.5)
ax.plot_surface(x,y,likelihood2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
plt.subplot(122)
plt.title('Contour plot of likelihood')
plt.contourf(x, y, likelihood1, cmap=cm.Reds, alpha = 0.5)
plt.contourf(x, y, likelihood2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
```
# Visualize the posterior
Use the prior and the likelihood you've computed to obtain the posterior distribution for each class.
As in the case of the likelihood above, make similar surface and contour plots for the posterior.
```
posterior1 = likelihood1*pi1/(likelihood1*pi1+likelihood2*pi2)
posterior2 = likelihood2*pi2/(likelihood1*pi1+likelihood2*pi2)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(121, projection='3d')
plt.title('Posterior')
ax.plot_surface(x,y,posterior1, cmap=cm.Reds, alpha = 0.5)
ax.plot_surface(x,y,posterior2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
plt.subplot(122)
plt.title('Contour plot of Posterior')
plt.contourf(x, y, posterior1, cmap=cm.Reds, alpha = 0.5)
plt.contourf(x, y, posterior2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
```
# Decision boundary
1. The decision boundary can be obtained by evaluating $P(C_2|x)>P(C_1|x)$ in Python. Use $\texttt{contourf}$ to plot the decision boundary. Use $\texttt{cmap=cm.Blues}$ and $\texttt{alpha=0.5}$.
1. Also overlay the scatter plot of train data points from the 2 classes on the same plot. Use red color for class 1 and blue color for class 2
```
decision = posterior2>posterior1
plt.figure(figsize=(10,10))
plt.contourf(x, y, decision, cmap=cm.Blues, alpha = 0.5)
plt.scatter(class1_train[:,0], class1_train[:,1], color = 'red', label = 'Class 1')
plt.scatter(class2_train[:,0], class2_train[:,1], color = 'blue', label = 'Class 2')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc = 'best')
plt.show()
```
# Test Data
Now let's use our trained model to classify test data points
1. $\texttt{test_data}$ contains the $x1,x2$ features of different data points
1. $\texttt{test_label}$ contains the true class of the data points. 0 means class 1. 1 means class 2.
1. Classify the test points based on whichever class has higher posterior probability for each data point
1. Use $\texttt{classification_report}$ to test the classification performance
```
test = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/test').to_numpy()
test_data, test_label = test[:,:2], test[:,2]
# classification
l1 = pi1*rv1.pdf(test_data)
l2 = pi2*rv2.pdf(test_data)
den = l1+l2
l1 /= den
l2 /= den
test_decision = l2>l1
print(classification_report(test_label, test_decision))
```
|
github_jupyter
|
# BLU02 - Learning Notebook - Data wrangling workflows - Part 2 of 3
```
import matplotlib.pyplot as plt
import pandas as pd
import os
```
# 2 Combining dataframes in Pandas
## 2.1 How many programs are there per season?
How many different programs does the NYP typically present per season?
Programs are under `/data/programs/`, which contains one file per season.
### Concatenate
To analyze how many programs there are per season, over time, we need a single dataframe containing *all* seasons.
Concatenation means, in short, to unite multiple dataframes (or series) in one.
The `pd.concat()` function performs concatenation operations along an axis (`axis=0` for index and `axis=1` for columns).
```
season_0 = pd.read_csv('./data/programs/1842-43.csv')
season_1 = pd.read_csv('./data/programs/1843-44.csv')
seasons = [season_0, season_1]
pd.concat(seasons, axis=1)
```
Concatenating like this makes no sense, as we no longer have a single observation per row.
What we want to do instead is to concatenate the dataframe along the index.
```
pd.concat(seasons, axis=0)
```
This dataframe looks better, but there's something weird with the index: it's not unique anymore.
Different observations share the same index. Not cool.
For dataframes that don't have a meaningful index, you may wish to ignore the indexes altogether.
```
pd.concat(seasons, axis=0, ignore_index=True)
```
Now, let's try something different.
Let's try to change the name of the columns, so that each dataframe has different ones, before concatenating.
```
season_0_ = season_0.copy()
season_0_.columns = [0, 1, 2, 'Season']
seasons_ = [season_0_, season_1]
pd.concat(seasons_, axis=0)
```
What a mess! What did we learn?
* When the dataframes have different columns, `pd.concat()` will take the union of all dataframes by default (no information loss)
* Concatenation will fill columns that are not present for specific dataframes with `np.NaN` (missing values).
The good news is that you can set how you want to glue the dataframes in regards to the other axis, the one not being concatenated.
Setting `join='inner'` will take the intersection, i.e., the columns that are present in all dataframes.
```
pd.concat(seasons_, axis=0, join='inner')
```
There you go. Concatenation complete.
### Append
The method `df.append()` is a shortcut for `pd.concat()`, that can be called on either a `pd.DataFrame` or a `pd.Series`.
```
season_0.append(season_1)
```
It can take multiple objects to concatenate as well. Please note the `ignore_index=True`.
```
season_2 = pd.read_csv('./data/programs/1844-45.csv')
more_seasons = [season_1, season_2]
season_0.append(more_seasons, ignore_index=True)
```
We are good to go. Let's use `pd.concat` to combine all seasons into a great dataframe.
```
def read_season(file):
path = os.path.join('.', 'data', 'programs', file)
return pd.read_csv(path)
files = os.listdir('./data/programs/')
files = [f for f in files if '.csv' in f]
```
A logical approach would be to iterate over all the files, appending each of them to a single dataframe.
```
%%timeit
programs = pd.DataFrame()
for file in files:
season = read_season(file)
programs = programs.append(season, ignore_index=True)
```
It is worth noting that both `pd.concat()` and `df.append()` make a full copy of the data, so calling them repeatedly inside a loop can create a significant performance hit.
Instead, use a list comprehension if you need to use the operation several times.
This way, you only call `pd.concat()` or `df.append()` once.
```
%%timeit
seasons = [read_season(f) for f in files if '.csv' in f]
programs = pd.concat(seasons, axis=0, ignore_index=True)
seasons = [read_season(f) for f in files if '.csv' in f]
programs = pd.concat(seasons, axis=0, ignore_index=True)
```
Now that we have the final `programs` dataframe, we can see how the number of distinct programs changes over time.
```
programs['Season'] = pd.to_datetime(programs['Season'].str[:4])
(programs.groupby('Season')
.size()
.plot(legend=False, use_index=True, figsize=(10, 7),
title='Number of programs per season (from 1842-43 to 2016-17)'));
```
The NYP appears to be investing in increasing the number of distinct programs per season since '95.
## 2.2 How many concerts are there per season?
What about the number of concerts? The first thing we need to do is to import the `concerts.csv` data.
```
concerts = pd.read_csv('./data/concerts.csv')
concerts.head()
```
We will use the Leon Levy Digital Archives ID (`GUID`) to identify each program.
Now, we have information regarding all the concerts that took place and the season for each program.
The problem? Information about the concert and the season are in different tables, and the program is the glue between the two. Familiar?
### Merge
Pandas provides high-performance join operations, very similar to SQL.
The `df.merge()` method provides an interface for all database-like join methods.
```
?pd.merge
```
We can call `pd.merge` to join both tables on the `GUID` (and the `ProgramID`, which provides similar info).
```
# Since GUID and ProgramID offer similar info, we will drop the latter.
programs = programs.drop(columns='ProgramID')
df = pd.merge(programs, concerts, on='GUID')
df.head()
```
Or, alternatively, we can call `merge()` directly on the dataframe.
```
df_ = programs.merge(concerts, on='GUID')
df_.head()
```
The critical parameter here is `how`. Since we are not setting it explicitly, the merge defaults to `inner` (an inner join).
But, in fact, you can use any join, just like you did in SQL: `left`, `right`, `outer` and `inner`.
Remember?

*Fig. 1 - Types of joins in SQL, note how left, right, outer and inner translate directly to Pandas.*
A refresher on different types of joins, all supported by Pandas:
| Pandas | SQL | What it does |
| ---------------------------------------------- | ---------------- | ----------------------------------------- |
| `pd.merge(left, right, on='key', how='left')`  | LEFT OUTER JOIN  | Use all keys from the left frame only     |
| `pd.merge(left, right, on='key', how='right')` | RIGHT OUTER JOIN | Use all keys from the right frame only    |
| `pd.merge(left, right, on='key', how='outer')` | FULL OUTER JOIN  | Use union of keys from both frames        |
| `pd.merge(left, right, on='key', how='inner')` | INNER JOIN       | Use intersection of keys from both frames |
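As a quick illustration with two hypothetical toy dataframes (not the NYP data), here is how the `how` argument changes which keys survive the merge:
```
import pandas as pd

left = pd.DataFrame({'key': ['a', 'b', 'c'], 'lval': [1, 2, 3]})
right = pd.DataFrame({'key': ['b', 'c', 'd'], 'rval': [4, 5, 6]})

pd.merge(left, right, on='key', how='inner')  # keys b, c (intersection)
pd.merge(left, right, on='key', how='left')   # keys a, b, c (all left keys; rval is NaN for a)
pd.merge(left, right, on='key', how='outer')  # keys a, b, c, d (union, NaN where there is no match)
```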
In this particular case, we have:
* A one-to-many relationship (i.e., one program to many concerts)
* Since every single show in `concerts` has a match in `programs`, the type of join we use doesn't matter.
We can use the `validate` argument to automatically check whether there are unexpected duplicates in the merge keys and check their uniqueness.
```
df__ = pd.merge(programs, concerts, on='GUID', how='outer', validate="one_to_many")
assert(concerts.shape[0] == df_.shape[0] == df__.shape[0])
```
Back to our question, how is the number of concerts per season evolving?
```
(programs.merge(concerts, on='GUID')
.groupby('Season')
.size()
.plot(legend=False, use_index=True, figsize=(10, 7),
title='Number of concerts per season (from 1842-43 to 2016-17)'));
```
Likewise, the number of concerts seems to be trending upwards since about 1995, which could be a sign of growing interest in the genre.
### Join
Now, we want the top-3 composers by total appearances.
Unsurprisingly, we start by importing `works.csv`.
```
works = pd.read_csv('./data/works.csv',index_col='GUID')
```
Alternatively, we can use `df.join()` instead of `df.merge()`.
There are, however, differences in the default behavior: for example `df.join` uses `how='left'` by default.
Let's try to perform the merge.
```
(programs.merge(works, on="GUID")
.head(n=3))
programs.merge(works, on="GUID").shape
(programs.join(works, on='GUID')
.head(n=3))
# equivalent to
# pd.merge(programs, works, left_on='GUID', right_index=True,
# how='left').head(n=3)
programs.join(works, on="GUID").shape
```
Notice that the shapes of the results are different: the two methods return a different number of rows. This comes from the different defaults: `df.merge()` performs an inner join by default, while `df.join()` performs a left join.
Typically, you would use `df.join()` when you want to do a left join, or when you want to join on the index of the dataframe on the right.
Now for our goal: what are the top-3 composers?
```
(programs.join(works, on='GUID')
.groupby('ComposerName')
.size()
.nlargest(n=3))
```
Wagner wins!
What about the top-3 works?
```
(programs.join(works, on='GUID')
.groupby(['ComposerName', 'WorkTitle'])
.size()
.nlargest(n=3))
```
Wagner wins three times!
|
github_jupyter
|
```
%matplotlib inline
```
# Faces dataset decompositions
This example applies different unsupervised matrix decomposition (dimensionality reduction) methods from the module
:py:mod:`sklearn.decomposition` (see the documentation chapter
`decompositions`) to the `olivetti_faces` dataset.
```
print(__doc__)
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
import logging
from time import time
from numpy.random import RandomState
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.cluster import MiniBatchKMeans
from sklearn import decomposition
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
rng = RandomState(0)
# #############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=n_col, n_row=n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
# #############################################################################
# List of the different estimators, whether to center and transpose the
# problem, and whether the transformer uses the clustering API.
estimators = [
('Eigenfaces - PCA using randomized SVD',
decomposition.PCA(n_components=n_components, svd_solver='randomized',
whiten=True),
True),
('Non-negative components - NMF',
decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3),
False),
('Independent components - FastICA',
decomposition.FastICA(n_components=n_components, whiten=True),
True),
('Sparse comp. - MiniBatchSparsePCA',
decomposition.MiniBatchSparsePCA(n_components=n_components, alpha=0.8,
n_iter=100, batch_size=3,
random_state=rng),
True),
('MiniBatchDictionaryLearning',
decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
n_iter=50, batch_size=3,
random_state=rng),
True),
('Cluster centers - MiniBatchKMeans',
MiniBatchKMeans(n_clusters=n_components, tol=1e-3, batch_size=20,
max_iter=50, random_state=rng),
True),
('Factor Analysis components - FA',
decomposition.FactorAnalysis(n_components=n_components, max_iter=2),
True),
]
# #############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces_centered[:n_components])
# #############################################################################
# Do the estimation and plot it
for name, estimator, center in estimators:
print("Extracting the top %d %s..." % (n_components, name))
t0 = time()
data = faces
if center:
data = faces_centered
estimator.fit(data)
train_time = (time() - t0)
print("done in %0.3fs" % train_time)
if hasattr(estimator, 'cluster_centers_'):
components_ = estimator.cluster_centers_
else:
components_ = estimator.components_
# Plot an image representing the pixelwise variance provided by the
# estimator e.g its noise_variance_ attribute. The Eigenfaces estimator,
# via the PCA decomposition, also provides a scalar noise_variance_
# (the mean of pixelwise variance) that cannot be displayed as an image
# so we skip it.
if (hasattr(estimator, 'noise_variance_') and
estimator.noise_variance_.ndim > 0): # Skip the Eigenfaces case
plot_gallery("Pixelwise variance",
estimator.noise_variance_.reshape(1, -1), n_col=1,
n_row=1)
plot_gallery('%s - Train time %.1fs' % (name, train_time),
components_[:n_components])
plt.show()
```
|
github_jupyter
|
Wayne H Nixalo - 09 Aug 2017
This JNB is an attempt to do the neural artistic style transfer and super-resolution examples done in class, on a GPU using PyTorch for speed.
Lesson NB: [neural-style-pytorch](https://github.com/fastai/courses/blob/master/deeplearning2/neural-style-pytorch.ipynb)
## Neural Style Transfer
Style Transfer / Super Resolution Implementation in PyTorch
```
%matplotlib inline
import importlib
import os, sys; sys.path.insert(1, os.path.join('../utils'))
from utils2 import *
import torch, torch.nn as nn, torch.nn.functional as F, torch.optim as optim
from torch.autograd import Variable
from torch.utils.serialization import load_lua
from torch.utils.data import DataLoader
from torchvision import transforms, models, datasets
```
### Setup
```
path = '../data/nst/'
fnames = pickle.load(open(path+'fnames.pkl','rb'))
img = Image.open(path + fnames[0]); img
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((1,1,1,3))
preproc = lambda x: (x - rn_mean)[:,:,:,::-1]
img_arr = preproc(np.expand_dims(np.array(img),0))
shp = img_arr.shape
deproc = lambda x: x[:,:,:,::-1] + rn_mean
```
### Create Model
```
def download_convert_vgg16_model():
model_url = 'http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7'
file = get_file(model_url, cache_subdir='models')
vgglua = load_lua(file).parameters()
vgg = models.VGGFeature()
for (src, dst) in zip(vgglua[0], vgg.parameters()): dst[:] = src[:]
torch.save(vgg.state_dict(), path + 'vgg16_feature.pth')
url = 'https://s3-us-west-2.amazonaws.com/jcjohns-models/'
fname = 'vgg16-00b39a1b.pth'
file = get_file(fname, url+fname, cache_subdir='models')
vgg = models.vgg.vgg16()
vgg.load_state_dict(torch.load(file))
optimizer = optim.Adam(vgg.parameters())
vgg.cuda();
arr_lr = bcolz.open(path + 'trn_resized_72.bc')[:]
arr_hr = bcolz.open(path + 'trn_resized_288.bc')[:]
arr = bcolz.open(path + 'trn_resized.bc')[:]
x = Variable(arr[0])
y = model(x)
url = 'http://www.files.fast.ai/models/'
fname = 'imagenet_class_index.json'
fpath = get_file(fname, url + fname, cache_subdir='models')
class ResidualBlock(nn.Module):
def __init__(self, num):
super(ResidualBlock, self).__init__()
self.c1 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
self.c2 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
self.b1 = nn.BatchNorm2d(num)
self.b2 = nn.BatchNorm2d(num)
def forward(self, x):
h = F.relu(self.b1(self.c1(x)))
h = self.b2(self.c2(h))
return h + x
class FastStyleNet(nn.Module):
def __init__(self):
super(FastStyleNet, self).__init__()
self.cs = [nn.Conv2d(3, 32, kernel_size=9, stride=1, padding=4),
nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)]
self.b1s = [nn.BatchNorm2d(i) for i in [32, 64, 128]]
self.rs = [ResidualBlock(128) for i in range(5)]
self.ds = [nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)]
self.b2s = [nn.BatchNorm2d(i) for i in [64, 32]]
self.d3 = nn.Conv2d(32, 3, kernel_size=9, stride=1, padding=4)
def forward(self, h):
for i in range(3): h = F.relu(self.b1s[i](self.cs[i](h)))
for r in self.rs: h = r(h)
for i in range(2): h = F.relu(self.b2s[i](self.ds[i](h)))
return self.d3(h)
```
### Loss Functions and Processing
|
github_jupyter
|
<a href="https://colab.research.google.com/github/gabilodeau/INF6804/blob/master/FeatureVectorsComp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
INF6804 Computer Vision
Polytechnique Montréal
Distances between histograms (L1, L2, MDPA, Bhattacharyya)
```
import numpy as np
import cv2
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
```
Function to compute the MDPA distance.
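For reference, the MDPA distance between two histograms $H_1$ and $H_2$ with $n$ bins is commonly defined as the sum of the absolute cumulative (prefix) differences:

$$d_{\mathrm{MDPA}}(H_1, H_2) = \sum_{i=1}^{n} \left| \sum_{j=1}^{i} \big(H_1[j] - H_2[j]\big) \right|$$

The function below implements this idea with a double loop over the prefix differences.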
```
def distMDPA(V1, V2):
Dist=0;
for i in range(0,len(V1)):
dint=0;
for j in range(0,i):
dint=dint+V1[j]-V2[j]
Dist=Dist+abs(dint)
return Dist;
```
Creation of 5 vectors. We will compare against Vecteur1 as the reference.
```
Vecteur1 = np.array([3.0, 4.0, 3.0, 1.0, 6.0])
Vecteur2 = np.array([2.0, 5.0, 3.0, 1.0, 6.0])
Vecteur3 = np.array([2.0, 4.0, 3.0, 1.0, 7.0])
Vecteur4 = np.array([1.0, 5.0, 4.0, 1.0, 6.0])
Vecteur5 = np.array([3.0, 5.0, 2.0, 2.0, 5.0])
```
L1 distance (L1 norm). The results will be displayed on a plot.
```
dist1 = cv2.norm(Vecteur1, Vecteur2, cv2.NORM_L1)
dist2 = cv2.norm(Vecteur1, Vecteur3, cv2.NORM_L1)
dist3 = cv2.norm(Vecteur1, Vecteur4, cv2.NORM_L1)
dist4 = cv2.norm(Vecteur1, Vecteur5, cv2.NORM_L1)
# For plotting...
x = [0, 0.1, 0.2, 0.3]
color = ['r','g','b','k']
dist = [dist1, dist2, dist3, dist4]
```
L2 distance (L2 norm).
```
dist1 = cv2.norm(Vecteur1, Vecteur2, cv2.NORM_L2)
dist2 = cv2.norm(Vecteur1, Vecteur3, cv2.NORM_L2)
dist3 = cv2.norm(Vecteur1, Vecteur4, cv2.NORM_L2)
dist4 = cv2.norm(Vecteur1, Vecteur5, cv2.NORM_L2)
x = x + [1, 1.1, 1.2, 1.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b','k']
```
MDPA distance (Minimum Difference of Pair Assignments).
```
dist1 = distMDPA(Vecteur1, Vecteur2)
dist2 = distMDPA(Vecteur1, Vecteur3)
dist3 = distMDPA(Vecteur1, Vecteur4)
dist4 = distMDPA(Vecteur1, Vecteur5)
x = x + [2, 2.1, 2.2, 2.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b','k']
```
Bhattacharyya distance, with the vectors normalized so that their values sum to 1.
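For histograms normalized to sum to 1 (as done below), OpenCV's `HISTCMP_BHATTACHARYYA` measure reduces to the Hellinger-style distance

$$d_B(p, q) = \sqrt{1 - \sum_{i} \sqrt{p_i\, q_i}},$$

which is 0 for identical histograms and grows towards 1 as they become more dissimilar.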
```
Vecteur1 = Vecteur1/np.sum(Vecteur1)
Vecteur2 = Vecteur2/np.sum(Vecteur2)
Vecteur3 = Vecteur3/np.sum(Vecteur3)
Vecteur4 = Vecteur4/np.sum(Vecteur4)
Vecteur5 = Vecteur5/np.sum(Vecteur5)  # normalize Vecteur5 as well, so all histograms sum to 1
dist1 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur2.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
dist2 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur3.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
dist3 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur4.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
dist4 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur5.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
x = x + [3, 3.1, 3.2, 3.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b', 'k']
```
Cosine similarity.
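As a reminder, the cosine similarity between two vectors $u$ and $v$ is

$$\cos(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert},$$

which depends only on the angle between the vectors, not on their magnitudes, so identical directions give a similarity of 1.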
```
dist1 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur2.reshape(1, -1))
dist2 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur3.reshape(1, -1))
dist3 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur4.reshape(1, -1))
dist4 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur5.reshape(1, -1))
x = x + [4, 4.1, 4.2, 4.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b', 'k']
```
Displaying the distances.
```
plt.scatter(x, dist, c = color)
plt.text(0,0, 'Distance L1')
plt.text(0.8,1, 'Distance L2')
plt.text(1.6,0, 'Distance MDPA')
plt.text(2.6,0.5, 'Bhattacharyya')
plt.text(3.8,0.3, 'Similarité\n cosinus')
plt.show()
```
|
github_jupyter
|
# The IPython widgets, now in IHaskell !!
It is highly recommended that users new to jupyter/ipython take the *User Interface Tour* from the toolbar above (Help -> User Interface Tour).
> This notebook introduces the [IPython widgets](https://github.com/ipython/ipywidgets), as implemented in [IHaskell](https://github.com/gibiansky/IHaskell). The `Button` widget is also demonstrated as a live action example.
### The Widget Hierarchy
These are all the widgets available from IPython/Jupyter.
#### Uncategorized Widgets
+ Button
+ Image*Widget*
+ Output*Widget*
#### Box Widgets
+ Box
+ FlexBox
+ Accordion
+ Tab*Widget*
#### Boolean Widgets
+ CheckBox
+ ToggleButton
#### Integer Widgets
+ IntText
+ BoundedIntText
+ IntProgress
+ IntSlider
+ IntRangeSlider
#### Float Widgets
+ FloatText
+ BoundedFloatText
+ FloatProgress
+ FloatSlider
+ FloatRangeSlider
#### Selection Widgets
+ Selection
+ Dropdown
+ RadioButtons
+ Select
+ SelectMultiple
+ ToggleButtons
#### String Widgets
+ HTML*Widget*
+ Latex*Widget*
+ TextArea
+ Text*Widget*
### Using Widgets
#### Necessary Extensions and Imports
All the widgets and related functions are available from a single module, `IHaskell.Display.Widgets`. It is strongly recommended that users use the `OverloadedStrings` extension, as widgets make extensive use of `Text`.
```
{-# LANGUAGE OverloadedStrings #-}
import IHaskell.Display.Widgets
```
The module can be imported unqualified. Widgets with common names, such as `Text`, `Image` etc. have a `-Widget` suffix to prevent name collisions.
#### Widget interface
Each widget has different properties, but the surface level API is the same.
Every widget has:
1. A constructor:
An `IO <widget>` value/function of the form `mk<widget_name>`.
2. A set of properties, which can be manipulated using `setField` and `getField`.
The `setField` and `getField` functions have nasty type signatures, but in practice they can be used intuitively.
```
:t setField
```
The `setField` function takes three arguments:
1. A widget
2. A `Field`
3. A value for the `Field`
```
:t getField
```
The `getField` function takes a `Widget` and a `Field` and returns the value of that `Field` for the `Widget`.
Another utility function is `properties`, which shows all properties of a widget.
```
:t properties
```
#### Displaying Widgets
IHaskell automatically displays anything *displayable* given to it directly.
```
-- Showables
1 + 2
"abc"
```
Widgets can either be displayed this way, or explicitly using the `display` function from `IHaskell.Display`.
```
import IHaskell.Display
:t display
```
#### Multiple displays
A widget can be displayed multiple times. All these *views* are representations of a single object, and thus are linked.
When a widget is created, a model representing it is created in the frontend. This model is used by all the views, and any modification to it propagates to all of them.
#### Closing widgets
Widgets can be closed using the `closeWidget` function.
```
:t closeWidget
```
### Our first widget: `Button`
Let's play with buttons as a starting example:
As noted before, all widgets have a constructor of the form `mk<Widget>`. Thus, to create a `Button`, we use `mkButton`.
```
button <- mkButton -- Construct a Button
:t button
```
Widgets can be displayed by just entering them into a cell.
```
button -- Display the button
```
To view a widget's properties, we use the `properties` function. It also shows the type represented by the `Field`, which generally are not visible in type signatures due to high levels of type-hackery.
```
-- The button widget has many properties.
properties button
```
Let's try making the button widget wider.
```
import qualified IHaskell.Display.Widgets.Layout as L
btnLayout <- getField button Layout
setField btnLayout L.Width $ Just "100%"
```
There is a lot that can be customized. For example:
```
setField button Description "Click Me (._.\")"
setField button ButtonStyle SuccessButton
setField btnLayout L.Border $ Just "ridge 2px"
setField btnLayout L.Padding $ Just "10"
setField btnLayout L.Height $ Just "7em"
```
The button widget also provides a click handler. We can make it do anything, except console input. Universally, no widget event can trigger console input.
```
setField button ClickHandler $ putStrLn "fO_o"
button -- Displaying again for convenience
```
Now try clicking the button, and see the output.
> Note: If you display to stdout using Jupyter Lab, it will be displayed in a log entry, not as the cell output.
We can't do console input, but you can always use another widget! See the other example notebooks for more information.
```
setField button ClickHandler $ getLine >>= putStrLn
```
|
github_jupyter
|
# Contaminate DNS Data
```
"""
Make dataset pipeline
"""
import pandas as pd
import numpy as np
import os
from collections import Counter
import math
import torch
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pad_sequence
from dga.models.dga_classifier import DGAClassifier
from dga.datasets.domain_dataset import DomainDataset
!pip install tldextract
import tldextract
df = pd.read_csv("../data/raw/dns.csv")
a_aaaa_df = df.loc[(df.qtype_name == 'A') | (df.qtype_name == 'AAAA')]
# Take subset by nxdomain response
nxdomain_df = a_aaaa_df.loc[(df['rcode_name'] == 'NXDOMAIN')]
# Drop subset from full records
a_aaaa_df = a_aaaa_df[a_aaaa_df['rcode_name'] != 'NXDOMAIN']
# Load known DGAs
mal_df = pd.read_csv("../data/processed/validation.csv")
mal_df = mal_df.loc[mal_df['label'] == 1]
# Inject dga domains randomly
nxdomain_df['query'] = np.random.choice(list(mal_df['domain'].values), len(nxdomain_df))
# Put dataset back together
a_aaaa_df = pd.concat([a_aaaa_df, nxdomain_df])
# a_aaaa_df['domain_name'] = a_aaaa_df['query'].str.replace('www.', '')
a_aaaa_df.drop(['QR', 'AA', 'TC', 'RD', 'Z', 'answers'], axis=1, inplace=True)
a_aaaa_df.sort_values(by=['ts'])
# a_aaaa_df['domain_name'].unique()
a_aaaa_df = a_aaaa_df.reset_index(drop=True)
def extract_domain(url):
return tldextract.extract(url).domain
a_aaaa_df['domain'] = a_aaaa_df['query'].apply(extract_domain)
def extract_tld(url):
return tldextract.extract(url).suffix
a_aaaa_df['tld'] = a_aaaa_df['query'].apply(extract_tld)
a_aaaa_df['domain_name'] = a_aaaa_df['domain'] + '.' + a_aaaa_df['tld']
a_aaaa_df.head()
model_dir = '../models/'
model_info = {}
model_info_path = os.path.join(model_dir, '1595825381_dga_model_info.pth')
with open(model_info_path, 'rb') as f:
model_info = torch.load(f)
print("model_info: {}".format(model_info))
# Determine the device and construct the model.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DGAClassifier(input_features=model_info['input_features'],
hidden_dim=model_info['hidden_dim'],
n_layers=model_info['n_layers'],
output_dim=model_info['output_dim'],
embedding_dim=model_info['embedding_dim'],
batch_size=model_info['batch_size'])
# Load the stored model parameters.
model_path = os.path.join(model_dir, '1595825381_dga_model.pth')
with open(model_path, 'rb') as f:
model.load_state_dict(torch.load(f))
# set to eval mode, could use no_grad
model.to(device).eval()
def entropy(s):
p, lns = Counter(s), float(len(s))
return -sum( count/lns * math.log(count/lns, 2) for count in p.values())
def pad_collate_pred(batch):
x_lens = [len(x) for x in batch]
xx_pad = pad_sequence(batch, batch_first=True, padding_value=0)
return xx_pad, x_lens
def get_predict_loader(batch_size, df):
print("Getting test and train data loaders.")
dataset = DomainDataset(df, train=False)
predict_dl = DataLoader(dataset, batch_size=batch_size, shuffle=False, collate_fn=pad_collate_pred)
return predict_dl
def get_prediction(df):
predict_dl = get_predict_loader(1000, df)
classes = {0: 'Benign', 1: 'DGA'}
model.eval()
predictions = []
with torch.no_grad():
for batch_num, (x_padded, x_lens) in enumerate(predict_dl):
output = model(x_padded, x_lens)
y_hat = torch.round(output.data)
predictions += [classes[int(key)] for key in y_hat.flatten().numpy()]
return predictions
a_aaaa_df = a_aaaa_df[~a_aaaa_df['domain_name'].str.contains('\(')].reset_index(drop=True)
a_aaaa_df = a_aaaa_df[~a_aaaa_df['domain_name'].str.contains(',')].reset_index(drop=True)
a_aaaa_df[['domain_name']]
a_aaaa_df['dga'] = get_prediction(a_aaaa_df[['domain_name']])
a_aaaa_df['entropy'] = a_aaaa_df['domain_name'].apply(entropy)
print(a_aaaa_df.shape)
a_aaaa_df.head(25)
a_aaaa_df.to_csv('../data/processed/demo_dns_logs.csv', index=False)
```
|
github_jupyter
|
# Visualize the best RFE conformations using cMDS plots
```
import pandas as pd
import numpy as np
import sys
sys.path.append('../..')
from helper_modules.run_or_load import *
from helper_modules.MDS import *
```
### Load protein related data
```
prot_name = 'fxa'
DIR = '../1_Download_and_prepare_protein_ensembles'
path_to_file = f'{DIR}/TABLA_MTDATA_FXA_136_crys_LIGS_INFO.json'
df_prot = pd.read_json(path_to_file)
df_prot.head(3)
```
### Load the dimensionality reduction results
```
df_dims = pd.read_pickle('../3_Protein_Ensembles_Analysis/df_PROTEINS_DIMS_reduced_TABLE.obj')
# Update the df with the mds axis
# Pocket shape
df_prot['vol_x'] = df_dims['mds_vol_pkt_x']
df_prot['vol_y'] = df_dims['mds_vol_pkt_y']
# secondary structure residues RMSD
df_prot['secres_x'] = df_dims['mds_sec_x']
df_prot['secres_y'] = df_dims['mds_sec_y']
# pocket residues RMSD
df_prot['pkt_x'] = df_dims['mds_pkt_x']
df_prot['pkt_y'] = df_dims['mds_pkt_y']
df_prot.head(3)
```
### Load POVME3 results and single-conformation docking performances (AUC-ROC)
```
# Extra features to get volume or surface area
df_extra = pd.read_pickle(f'../4_Ensemble_docking_results/TABLE_Confs_Features_and_performances_fxa.pkl')
# Adding to the main df
df_prot['volume'] = df_extra['Pk. Volume']
df_prot['surf_area'] = df_extra['Pk. SASA']
# ROC-AUC single performance
df_prot['AUC-ROC'] = df_extra['AUC-ROC']
df_prot.head(3)
```
### Load *Recursive Feature Elimination* results
```
# Open RFE_estimator
dataset = 'MERGED'
model_name = 'XGB_tree'
split = 'random'
filename = f'./cachedir/rfe_selectors/RFE_xgb_{prot_name}.joblib'
# Load the RFE selector (computed in the previous notebook)
rfe_selector = joblib.load(filename)
# Create a dataframe with the protein rankings
df_ranks = pd.DataFrame({
'pdb_id' : df_prot.index,
'rfe_ranking': rfe_selector.ranking_
})
df_ranks = df_ranks.sort_values('rfe_ranking').set_index('pdb_id')
# Update the df with the rank values
df_prot = df_prot.merge(df_ranks, left_index=True, right_index=True)\
.sort_values('rfe_ranking')
df_prot.head(3)
```
## cMDS plots
We will use `ggplot2` for plotting
```
%load_ext rpy2.ipython
```
Just a few modifications for visualization purposes.
```
# To be able to plot confs with no inhibitors => NA == 10
df_prot['Inhib_mass_num'] = pd.to_numeric(df_prot['Inhib_mass']).\
fillna(10) ** 2
df_prot['volume.T'] = (df_prot['volume']/100) ** 1.5
df_selected = df_prot.sort_values('rfe_ranking').head(16)
x = 'vol_x'
y = 'vol_y'
size='volume.T'
```
#### Create the dataframe for plotting
```
# This is the final table for plotting
df_volpk = df_prot[['rfe_ranking', 'vol_x', 'vol_y', 'volume']]
df_volpk = df_volpk.rename({'vol_x': 'x', 'vol_y': 'y'}, axis = 1)
df_volpk
%%R -i df_volpk -i prot_name -w 4. -h 4. --units in -r 200
source('../../R_scripts/plot_cMDS.R')
prot_name <- prot_name
p <- plot_cMDS(df_volpk)
# Save the picture
space <- 'povme'
methodology <- 'MDS_plots/'
save_path = '~/Documents/Doctorado/Paper_doctorado/Response_to_reviewers/Figuras_mayor_review/raw_imgs/'
filename <- paste0(save_path, methodology,
paste(prot_name, space, 'MDS.pdf', sep='_'))
ggsave(filename, plot=p, width=4., height= 4.)
print(p)
```
## Swarplot with the AUC-ROC values per conformation
- The following plot shows the distribution of the protein conformations with respect to their AUC-ROC values, computed from their individual docking results.
```
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as ticker
from matplotlib.colors import LinearSegmentedColormap
top_confs = 8
# Define the colormap
cmap = LinearSegmentedColormap.from_list(
name ='test',
colors = ["red", "orange", "#374E55"],
N = top_confs
)
matplotlib.cm.register_cmap("mycolormap", cmap)
sns.set(font_scale = 1.1, style = 'whitegrid')
# Work on a copy of the dataframe
df_ = df_prot.copy()
# Flag the top `top_confs` conformations by RFE ranking
df_['top_mask'] = [2 if i <= top_confs else
1 for i in df_['rfe_ranking']]
df_ = df_[['AUC-ROC', 'top_mask', 'rfe_ranking']]\
.melt(id_vars=('top_mask',
'rfe_ranking'))
fig, ax = plt.subplots(figsize=(2.2, 4.45))
# Blue dots (all conformations)
np.random.seed(2)
sns.swarmplot(y = 'value',
x = 'variable',
data = df_,
size = 4.6,
ax = ax,
color = '#87DADE')
# Plot the top RFE 16 conformations
df_top = df_.query('top_mask == 2')
np.random.seed(2)
sns.swarmplot(y = 'value',
x = 'variable',
data = df_top,
size = 5,
ax = ax,
hue ='rfe_ranking',
edgecolor = 'black',
linewidth = 0.5,
palette = 'mycolormap')
# Axis and labels
ax.set_yticks(np.arange(0.5, 0.70, .05))
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.2f'))
ax.yaxis.tick_left()
ax.get_legend().remove()
ax.tick_params(length = 2, color = 'black', axis = 'y')
ax.grid(True, linewidth = 0.7)
ax.tick_params(axis="y",direction="in", pad=-27)
ax.set(xlabel = 'Protein conformations', ylabel = '')
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(0.55)
ax.spines[axis].set_edgecolor('black')
plt.savefig(f'{prot_name}_swarm_auc.pdf')
# Save the picture
plt.show()
top_confs = 8
# Define the colormap
cmap = LinearSegmentedColormap.from_list(
name ='test',
colors = ["red", "orange", "#374E55"],
N = top_confs
)
matplotlib.cm.register_cmap("mycolormap", cmap)
sns.set(font_scale = 0.7, style = 'whitegrid')
# Work on a copy of the conformations dataframe
df_ = df_prot.copy()
# Flag the top-ranked conformations (rfe_ranking <= top_confs)
df_['top_mask'] = [2 if i <= top_confs else
1 for i in df_['rfe_ranking']]
df_ = df_[['AUC-ROC', 'top_mask', 'rfe_ranking']]\
.melt(id_vars=('top_mask',
'rfe_ranking'))
# Get the highest AUC-ROC among the 32 lowest-scoring conformations
auc_worst_32 = df_['value'].nsmallest(32).max()
df_['worst_32'] = df_['value'] <= auc_worst_32
fig, ax = plt.subplots(figsize=(1.7, 3.52))
# Blue dots (all conformations)
np.random.seed(2)
sns.swarmplot(y = 'value',
x = 'variable',
data = df_,
size = 3.6,
ax = ax,
alpha = 0.7,
hue = 'worst_32',
palette = ['#F0B3B2', '#5CA586'])
# Axis and labels
ax.set_yticks(list(np.arange(0.3, 1.1, .1)) + [auc_worst_32])
ax.get_yticklabels()[-1].set_color("#B24745")
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.2f'))
ax.yaxis.tick_left()
ax.get_legend().remove()
plt.axhline(y=0.5, color='darkgrey', linewidth = 1.2, linestyle = '--')
plt.axhline(y=auc_worst_32, color='#79AF97',
linestyle=':', linewidth = 1.2)
ax.fill_between([-1,1], [0], [auc_worst_32], color='#79AF97', alpha = 0.3 )
ax.tick_params(length = 3, color = 'black', axis = 'y')
ax.grid(True, linewidth = 0.7)
# ax.tick_params(axis="y",direction="in", pad=-27)
ax.set_xlabel('SCPs from the entire dataset', fontsize = 8)
ax.set_ylabel('')
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(0.55)
ax.spines[axis].set_edgecolor('black')
plt.ylim(0.265, 1.033)
plt.savefig(f'{prot_name}_swarm_auc.pdf')
# Save the picture
plt.show()
```
## MDS using Secondary structure - Pisani (2016) residues.
- The following projection was computed from the pairwise RMSD matrix of the C$\alpha$ of the residues defined by Pisani (2016).
```
df_secRMSD = df_prot[['rfe_ranking', 'secres_x', 'secres_y', 'volume']]
df_secRMSD = df_secRMSD.rename({'secres_x': 'x', 'secres_y': 'y'}, axis = 1)
%%R -i df_secRMSD -w 3.5 -h 3.5 --units in -r 200
p <- plot_cMDS(df_secRMSD)
# Save the picture
space <- 'secRMSD'
methodology <- 'MDS_plots/'
save_path = '~/Documents/Doctorado/Paper_doctorado/Response_to_reviewers/Figuras_mayor_review/raw_imgs/'
filename <- paste0(save_path, methodology,
paste(prot_name, space, 'MDS.pdf', sep='_'))
ggsave(filename, plot=p, width=4.0, height= 4.0)
print(p)
```
## MDS using pocket residues
```
df_pkRMSD = df_prot[['rfe_ranking', 'pkt_x', 'pkt_y', 'volume']]
df_pkRMSD = df_pkRMSD.rename({'pkt_x': 'x', 'pkt_y': 'y'}, axis = 1)
%%R -i df_pkRMSD -w 4.1 -h 4.1 --units in -r 200
p <- plot_cMDS(df_pkRMSD)
# Save the picture
space <- 'pkRMSD'
methodology <- 'MDS_plots/'
save_path = '~/Documents/Doctorado/Paper_doctorado/Response_to_reviewers/Figuras_mayor_review/raw_imgs/'
filename <- paste0(save_path, methodology,
paste(prot_name, space, 'MDS.pdf', sep='_'))
ggsave(filename, plot=p, width=4.0, height= 4.0)
print(p)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/kuriousk516/HIST4916a-Stolen_Bronzes/blob/main/Stolen_Bronzes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Stolen Bronzes: Western Museums and Repatriation
## Introduction
>"*Walk into any European museum today and you will see the curated spoils of Empire. They sit behind plate glass: dignified, tastefully lit. Accompanying pieces of card offer a name, date and place of origin. They do not mention that the objects are all stolen*."
>
> 'Radicals in Conversation': The Brutish Museums
Public history and digital humanities offer a locus for contending with difficult pasts. Museums, often considered bastions of knowledge, learning, and public good, have fallen under an increasingly critical gaze -- and rightfully so. Public museums have been tools of colonialism, racism, and superiority centred around the supremacy of the west and its history.
Digital repositories of museum archives and websites can be used to subvert the exclusionary practices employed by museums and provide tools for marginalized peoples. The purpose of this notebook is to act as a digital tool for real-life change, and it is focused on Dan Hicks' [Tweet](https://twitter.com/profdanhicks/status/1375421209265983488) and book, *The Brutish Museums*.
```
%%html
<iframe src="https://drive.google.com/file/d/1txSH3UkjJgLTeQW47MGLfrht7AHCEkGC/preview" width="640" height="480"></iframe>
```
What I read in Dan Hicks' Tweet was a call to action. Not necessarily for the average citizen to take the bronzes back, but to start an important discussion about the nature of artifact acquisition and to confront how museums procured these items in the first place.
The appendix's list is a small fraction of the stolen artifacts found in hundreds of museums all over the world, but it is a powerful point of focus. I want to create something, however small, that gives others a visual representation of the stolen artifacts' distribution and lets them interrogate why (mostly) western museums are the institutions holding these artifacts, what effect this has, and what's being done with them. Can anyone own art? Who has the power to decide? How do we give that power back to those who were stolen from?
To learn more about the Benin bronzes and their history, a good place to start is with the ['Radicals in Conversation'](https://www.plutobooks.com/blog/podcast-brutish-museums-benin-bronzes-decolonisation/) podcast.
And now, what I have here is a helpful tool for all of us to answer, **"*How close are you right this second to a looted Benin Bronze*?"**
# Data
I have compiled a dataframe of all the museums listed in Hicks' appendix; you can see the original above in his Tweet. The data is in a .CSV file stored in my [GitHub repository](https://github.com/kuriousk516/HIST4916a-Stolen_Bronzes), and you can also find screenshots of the errors I encountered and the advice I received through the HIST4916a Discord server, some of which I will reference here when discussing data limitations.
## Mapping with Folium
Folium seemed the best choice for this project since it doesn't rely on Google Maps for the map itself or the data entry process. [This is the tutorial](https://craftingdh.netlify.app/tutorials/folium/) that I used for the majority of the data coding, and this is the [Point Map alternative](https://handsondataviz.org/mymaps.html) I considered but decided against.
```
import lxml
import pandas as pd
pd.set_option("max_rows", 400)
pd.set_option("max_colwidth", 400)
import pandas, os
os.listdir()
# Example output: ['.config', 'benin_bronze_locations2.csv', 'sample_data']
```
Here is where I ran into some trouble. I was having great difficulty in loading my .CSV file into the notebook, so I uploaded the file from my computer. Here is the alternative code to upload it using the RAW link from GitHub:
url = 'copied_raw_GH_link'
df1 = pd.read_csv(url)
If you have another (simpler) way of getting the job done, I fully encourage you to alter the code to make it happen.
```
from google.colab import files
uploaded = files.upload()
```
In the .CSV file, I only had the names of the museums, cities, and countries. Manually inputting the necessary data for plotting the locations would be time-consuming and tedious, but I have an example using geopy and Nominatim to pull individual location info for the cases where "NaN" pops up when expanding the entire dataframe.
```
df1=pandas.read_csv('benin_bronze_locations2.csv', encoding = "ISO-8859-1", engine ='python')
df1
!pip install geopy
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="BENIN-BRONZES", timeout=2)
location = geolocator.geocode("Ulster Museum United Kingdom")
location
```
Great! Now we have the means of finding the relevant map information for individual entries. But to process the large amount of data, I followed [this YouTube tutorial](https://www.youtube.com/watch?v=0IjdfgmWzMk) for some extra help.
```
def find_location(row):
place = row['place']
location = geolocator.geocode(place)
if location != None:
return location.address, location.latitude, location.longitude, location.raw['importance']
else:
return "Not Found", "Not Found", "Not Found", "Not Found"
```
To expand on my data, I needed to add a new column to my dataframe -- the addresses of the museums.
```
df1["Address"]=df1["Place"]+", "+df1["City"]+", "+df1["Country"]
df1
#Then I added this string to the geocode to create a coordinates column.
df1["Coordinates"]=df1["Address"].apply(geolocator.geocode)
df1
```
After compiling the addresses and coordinates, the dataframe needed the latitude and longitudes for Folium to plot the locations on the map.
```
df1["Latitude"]=df1["Coordinates"].apply(lambda x: x.latitude if x !=None else None)
df1["Longitude"]=df1["Coordinates"].apply(lambda x: x.longitude if x !=None else None)
df1
!pip install folium
import folium
beninbronze_map = folium.Map(location=[6.3350, 5.6037], zoom_start=7)
beninbronze_map
```
I want Benin City to be the centre of this map, a rough point of origin. The Kingdom of Benin existed in modern day Nigeria, and it's where the looted bronzes belong. Only *nine* locations in Nigeria have collections of the bronzes, as opposed to the 152 others all over Europe, America, Canada, Russia, and Japan. Nigeria needs to be the centre of the conversation of the looted bronzes and repatriation, and so it is the centre of the map being created.
```
# First attempt: this version expects lowercase column names ('lat', 'lon', 'place'),
# which do not exist in df1, so it is superseded by the version further below.
def create_map_markers(row, beninbronze_map):
    folium.Marker(location=[row['lat'], row['lon']], popup=row['place']).add_to(beninbronze_map)
# Mark Benin City itself as the point of origin.
folium.Marker(location=[6.3350, 5.6037], popup="Send the bronzes home").add_to(beninbronze_map)
beninbronze_map
# Corrected version using the actual column names of df1.
def create_map_markers(row, beninbronze_map):
    folium.Marker(location=[row['Latitude'], row['Longitude']], popup=row['Place']).add_to(beninbronze_map)
```
Many of the data entries came up as "NaN" when the code was trying to find their latitude and longitude. It's an invalid entry and needs to be dropped in order for the map markers to function. This is very important to note: out of the 156 data entries, only 86 were plotted on the map. The missing coordinates need to be added to the dataframe, but that's a bit beyond the scope of this project. I invite anyone with the time to complete the map markers using the code examples above.
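If you want to try filling in those missing coordinates, here is a minimal sketch. It assumes the `df1` and `geolocator` objects created above, and uses geopy's `RateLimiter` to pace the requests, since Nominatim often returns empty results when queried too quickly; the column names match the dataframe built earlier.
```
from geopy.extra.rate_limiter import RateLimiter

# Pace the requests so Nominatim does not drop them.
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

# Retry only the rows whose coordinates came back empty, then refresh lat/lon.
missing = df1["Coordinates"].isna()
df1.loc[missing, "Coordinates"] = df1.loc[missing, "Address"].apply(geocode)
df1["Latitude"] = df1["Coordinates"].apply(lambda x: x.latitude if x is not None else None)
df1["Longitude"] = df1["Coordinates"].apply(lambda x: x.longitude if x is not None else None)
```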
```
df1.dropna(subset = ["Latitude"], inplace=True)
df1.dropna(subset = ["Longitude"], inplace=True)
nan_value = float("NaN")
df1.replace("",nan_value, inplace=True)
df1.dropna(subset = ["Latitude"], inplace=True)
df1.dropna(subset = ["Longitude"], inplace=True)
df1
df1.apply(lambda row:folium.CircleMarker(location=[row["Latitude"],
row["Longitude"]]).add_to(beninbronze_map),
axis=1)
beninbronze_map
beninbronze_map.save("stolen-bronzes-map.html")
```
# Conclusion
Now we have a map showing (some of) the locations of the looted Benin bronzes. It needs to be expanded to include the other locations, but I hope it helped you to think about what Dan Hicks' asked: how close are you, right this minute, to a looted Benin bronze?
# Recommended Reading and Points of Reference
Abt, Jeffrey. “The Origins of the Public Museum.” In A Companion to Museum Studies, 115–134. Malden, MA, USA: Blackwell Publishing Ltd, 2006.
Bennett, Tony. 1990. “The Political Rationality of the Museum,” Continuum: The Australian Journal of Media and Culture 2, no. 1 (1990).
Bivens, Joy, and Ben Garcia, Porchia Moore, nikhil trivedi, Aletheia Wittman. 2019. ‘Collections: How We Hold the Stuff We Hold in Trust’ in MASSAction, Museums As Site for Social Action, toolkit, https://static1.squarespace.com/static/58fa685dff7c50f78be5f2b2/t/59dcdd27e5dd5b5a1b51d9d8/1507646780650/TOOLKIT_10_2017.pdf
DW.com. "'A matter of fairness': New debate about Benin Bronzes in Germany." Published March 26, 2021. https://www.dw.com/en/a-matter-of-fairness-new-debate-about-benin-bronzes-in-germany/a-57013604
Hudson, David J. 2016. “On Dark Continents and Digital Divides: Information Inequality and the Reproduction of Racial Otherness in Library and Information Studies” https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9862.
Kreps, Christina. 2008. ‘Non-western Models of Museums and Curation in Cross-cultural Perspective’in Sharon Macdonald, ed. ‘Companion to Museum Studies’.
MacDonald, Sharon. 2008. “Collecting Practices” in Sharon Macdonald, ed. ‘Companion to Museum Studies’.
Sentance, Nathan mudyi. 2018. “Why Do We Collect,” Archival Decolonist blog, August 18, 2018, https://archivaldecolonist.com/2018/08/18/why-do-we-collect/
https://www.danhicks.uk/brutishmuseums
https://www.plutobooks.com/blog/podcast-brutish-museums-benin-bronzes-decolonisation/
|
github_jupyter
|
##### Copyright 2020 The OpenFermion Developers
# Introduction to OpenFermion
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/openfermion/tutorials/intro_to_openfermion"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/intro_to_openfermion.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/intro_to_openfermion.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/intro_to_openfermion.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Note: The examples below must be run sequentially within a section.
## Setup
Install the OpenFermion package:
```
try:
import openfermion
except ImportError:
!pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion
```
## Initializing the FermionOperator data structure
Fermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators, $a^\dagger_k$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $\sigma^+_k$ and $\sigma^-_k$ but are distinguished by the canonical fermionic anticommutation relations, $\{a^\dagger_i, a^\dagger_j\} = \{a_i, a_j\} = 0$ and $\{a_i, a_j^\dagger\} = \delta_{ij}$. Any weighted sums of products of these operators are represented with the FermionOperator data structure in OpenFermion. The following are examples of valid FermionOperators:
$$
\begin{align}
& a_1 \nonumber \\
& 1.7 a^\dagger_3 \nonumber \\
&-1.7 \, a^\dagger_3 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1 \nonumber
\end{align}
$$
The FermionOperator class is contained in $\textrm{ops/_fermion_operator.py}$. In order to support fast addition of FermionOperator instances, the class is implemented as a hash table (python dictionary). The keys of the dictionary encode the strings of ladder operators and the values of the dictionary store the coefficients. The strings of ladder operators are encoded as a tuple of 2-tuples which we refer to as the "terms tuple". Each ladder operator is represented by a 2-tuple. The first element of the 2-tuple is an int indicating the tensor factor on which the ladder operator acts. The second element of the 2-tuple is a Boolean: 1 represents raising and 0 represents lowering. For instance, $a^\dagger_8$ is represented in a 2-tuple as $(8, 1)$. Note that indices start at 0 and the identity operator is an empty tuple. Below we give some examples of operators and their terms tuples:
$$
\begin{align}
I & \mapsto () \nonumber \\
a_1 & \mapsto ((1, 0),) \nonumber \\
a^\dagger_3 & \mapsto ((3, 1),) \nonumber \\
a^\dagger_3 a_1 & \mapsto ((3, 1), (1, 0)) \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto ((4, 1), (3, 1), (9, 0), (1, 0)) \nonumber
\end{align}
$$
Note that when initializing a single ladder operator one should be careful to add the comma after the inner pair. This is because in python ((1, 2)) is just (1, 2), whereas ((1, 2),) is a tuple containing the tuple (1, 2). The "terms tuple" is usually convenient when one wishes to initialize a term as part of a coded routine. However, the terms tuple is not particularly intuitive. Accordingly, OpenFermion also supports another, user-friendly string notation, shown below. This representation is rendered when calling "print" on a FermionOperator.
$$
\begin{align}
I & \mapsto \textrm{""} \nonumber \\
a_1 & \mapsto \textrm{"1"} \nonumber \\
a^\dagger_3 & \mapsto \textrm{"3^"} \nonumber \\
a^\dagger_3 a_1 & \mapsto \textrm{"3^}\;\textrm{1"} \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto \textrm{"4^}\;\textrm{3^}\;\textrm{9}\;\textrm{1"} \nonumber
\end{align}
$$
Let's initialize our first term! We do it two different ways below.
```
from openfermion.ops import FermionOperator
my_term = FermionOperator(((3, 1), (1, 0)))
print(my_term)
my_term = FermionOperator('3^ 1')
print(my_term)
```
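As a quick check of the trailing-comma caveat mentioned above, here is a minimal example; it only uses the `FermionOperator` class already imported.
```
# A single ladder operator still needs a tuple of 2-tuples, hence the trailing comma.
single_ladder = FermionOperator(((3, 1),))
print(single_ladder)
# FermionOperator((3, 1)) would not work: (3, 1) is a single 2-tuple, not a tuple of them.
```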
The preferred way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All in-place operators (such as +=) modify instances in place, whereas binary operators such as + create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initialize the identity. The empty initializer FermionOperator() initializes the zero operator.
```
good_way_to_initialize = FermionOperator('3^ 1', -1.7)
print(good_way_to_initialize)
bad_way_to_initialize = -1.7 * FermionOperator('3^ 1')
print(bad_way_to_initialize)
identity = FermionOperator('')
print(identity)
zero_operator = FermionOperator()
print(zero_operator)
```
Note that FermionOperator has only one attribute: .terms. This attribute is the dictionary which stores the term tuples.
```
my_operator = FermionOperator('4^ 1^ 3 9', 1. + 2.j)
print(my_operator)
print(my_operator.terms)
```
## Manipulating the FermionOperator data structure
So far we have explained how to initialize a single FermionOperator such as $-1.7 \, a^\dagger_3 a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1$. To do this, just add together two FermionOperators! We demonstrate below.
```
from openfermion.ops import FermionOperator
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 + term_2
print(my_operator)
my_operator = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator += term_2
print('')
print(my_operator)
```
The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the inplace method +=, which is more efficient. This is especially important when trying to construct a very large FermionOperator. FermionOperators also support a wide range of builtins including, str(), repr(), ==, !=, *=, *, /, /=, +, +=, -, -=, - and **. Note that since FermionOperators involve floats, == and != check for (in)equality up to numerical precision. We demonstrate some of these methods below.
```
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 - 33. * term_2
print(my_operator)
my_operator *= 3.17 * (term_2 + term_1) ** 2
print('')
print(my_operator)
print('')
print(term_2 ** 3)
print('')
print(term_1 == 2.*term_1 - term_1)
print(term_1 == my_operator)
```
Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.
```
from openfermion.utils import commutator, count_qubits, hermitian_conjugated
from openfermion.transforms import normal_ordered
# Get the Hermitian conjugate of a FermionOperator, count its qubits, and check if it is normal-ordered.
term_1 = FermionOperator('4^ 3 3^', 1. + 2.j)
print(hermitian_conjugated(term_1))
print(term_1.is_normal_ordered())
print(count_qubits(term_1))
# Normal order the term.
term_2 = normal_ordered(term_1)
print('')
print(term_2)
print(term_2.is_normal_ordered())
# Compute a commutator of the terms.
print('')
print(commutator(term_1, term_2))
```
## The QubitOperator data structure
The QubitOperator data structure is another essential part of openfermion. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance $X_0 Z_3 Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0, \textrm{"X"}), (3, \textrm{"Z"}), (4, \textrm{"Y"}))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below.
```
from openfermion.ops import QubitOperator
my_first_qubit_operator = QubitOperator('X1 Y2 Z3')
print(my_first_qubit_operator)
print(my_first_qubit_operator.terms)
operator_2 = QubitOperator('X3 Z4', 3.17)
operator_2 -= 77. * my_first_qubit_operator
print('')
print(operator_2)
```
## Jordan-Wigner and Bravyi-Kitaev
openfermion provides functions for mapping FermionOperators to QubitOperators.
```
from openfermion.ops import FermionOperator
from openfermion.transforms import jordan_wigner, bravyi_kitaev
from openfermion.utils import hermitian_conjugated
from openfermion.linalg import eigenspectrum
# Initialize an operator.
fermion_operator = FermionOperator('2^ 0', 3.17)
fermion_operator += hermitian_conjugated(fermion_operator)
print(fermion_operator)
# Transform to qubits under the Jordan-Wigner transformation and print its spectrum.
jw_operator = jordan_wigner(fermion_operator)
print('')
print(jw_operator)
jw_spectrum = eigenspectrum(jw_operator)
print(jw_spectrum)
# Transform to qubits under the Bravyi-Kitaev transformation and print its spectrum.
bk_operator = bravyi_kitaev(fermion_operator)
print('')
print(bk_operator)
bk_spectrum = eigenspectrum(bk_operator)
print(bk_spectrum)
```
We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries.
```
from openfermion.transforms import reverse_jordan_wigner
# Initialize QubitOperator.
my_operator = QubitOperator('X0 Y1 Z2', 88.)
my_operator += QubitOperator('Z1 Z4', 3.17)
print(my_operator)
# Map QubitOperator to a FermionOperator.
mapped_operator = reverse_jordan_wigner(my_operator)
print('')
print(mapped_operator)
# Map the operator back to qubits and make sure it is the same.
back_to_normal = jordan_wigner(mapped_operator)
back_to_normal.compress()
print('')
print(back_to_normal)
```
## Sparse matrices and the Hubbard model
Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There is code in both openfermion.transforms and openfermion.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There are numerous functions in openfermion.utils which one can call on the sparse operators such as "get_gap", "get_hartree_fock_state", "get_ground_state", etc. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the openfermion.hamiltonians module which constructs lattice models of fermions such as Hubbard models.
```
from openfermion.hamiltonians import fermi_hubbard
from openfermion.linalg import get_sparse_operator, get_ground_state
from openfermion.transforms import jordan_wigner
# Set model.
x_dimension = 2
y_dimension = 2
tunneling = 2.
coulomb = 1.
magnetic_field = 0.5
chemical_potential = 0.25
periodic = 1
spinless = 1
# Get fermion operator.
hubbard_model = fermi_hubbard(
x_dimension, y_dimension, tunneling, coulomb, chemical_potential,
magnetic_field, periodic, spinless)
print(hubbard_model)
# Get qubit operator under Jordan-Wigner.
jw_hamiltonian = jordan_wigner(hubbard_model)
jw_hamiltonian.compress()
print('')
print(jw_hamiltonian)
# Get scipy.sparse.csc representation.
sparse_operator = get_sparse_operator(hubbard_model)
print('')
print(sparse_operator)
print('\nEnergy of the model is {} in units of T and J.'.format(
get_ground_state(sparse_operator)[0]))
```
## Hamiltonians in the plane wave basis
A user can write plugins to openfermion which allow for the use of, e.g., third-party electronic structure packages to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc. using Gaussian basis sets. We may provide scripts which interface between such packages and openfermion in the future but do not discuss them in this tutorial.
When using simpler basis sets such as plane waves, these packages are not needed. openfermion comes with code which computes Hamiltonians in the plane wave basis. Note that when using plane waves, one is working with the periodized Coulomb operator, best suited for condensed phase calculations such as studying the electronic structure of a solid. To obtain these Hamiltonians one must choose to study the system without a spin degree of freedom (spinless), one must specify the dimension in which the calculation is performed (n_dimensions, usually 3), one must specify how many plane waves are in each dimension (grid_length) and one must specify the length scale of the plane wave harmonics in each dimension (length_scale) and also the locations and charges of the nuclei. One can generate these models with plane_wave_hamiltonian() found in openfermion.hamiltonians. For simplicity, below we compute the Hamiltonian in the case of zero external charge (corresponding to the uniform electron gas, aka jellium). We also demonstrate that one can transform the plane wave Hamiltonian using a Fourier transform without affecting the spectrum of the operator.
```
from openfermion.hamiltonians import jellium_model
from openfermion.utils import Grid
from openfermion.linalg import eigenspectrum
from openfermion.transforms import jordan_wigner, fourier_transform
# Let's look at a very small model of jellium in 1D.
grid = Grid(dimensions=1, length=3, scale=1.0)
spinless = True
# Get the momentum Hamiltonian.
momentum_hamiltonian = jellium_model(grid, spinless)
momentum_qubit_operator = jordan_wigner(momentum_hamiltonian)
momentum_qubit_operator.compress()
print(momentum_qubit_operator)
# Fourier transform the Hamiltonian to the position basis.
position_hamiltonian = fourier_transform(momentum_hamiltonian, grid, spinless)
position_qubit_operator = jordan_wigner(position_hamiltonian)
position_qubit_operator.compress()
print('')
print(position_qubit_operator)
# Check the spectra to make sure these representations are iso-spectral.
spectral_difference = eigenspectrum(momentum_qubit_operator) - eigenspectrum(position_qubit_operator)
print('')
print(spectral_difference)
```
## Basics of MolecularData class
Data from electronic structure calculations can be saved in an OpenFermion data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (e.g. one needs to mess around with different types of SCF routines to make things converge). Accordingly, we anticipate that users will want some way to automatically database the results of their electronic structure calculations so that important data (such as the SCF integrals) can be looked up on-the-fly if the user has computed them in the past. OpenFermion supports a data provenance strategy which saves key results of the electronic structure calculation (including pointers to files containing large amounts of data, such as the molecular integrals) in an HDF5 container.
The MolecularData class stores information about molecules. One initializes a MolecularData object by specifying parameters of a molecule such as its geometry, basis, multiplicity, charge and an optional string describing it. One can also initialize MolecularData simply by providing a string giving a filename where a previous MolecularData object was saved in an HDF5 container. One can save a MolecularData instance by calling the class's .save() method. This automatically saves the instance in a data folder specified during OpenFermion installation. The name of the file is generated automatically from the instance attributes and optionally provided description. Alternatively, a filename can also be provided as an optional input if one wishes to manually name the file.
When electronic structure calculations are run, the data files for the molecule can be automatically updated. If one wishes to later use that data they either initialize MolecularData with the instance filename or initialize the instance and then later call the .load() method.
Basis functions are provided to initialization using a string such as "6-31g". Geometries can be specified using a simple txt input file (see geometry_from_file function in molecular_data.py) or can be passed using a simple python list format demonstrated below. Atoms are specified using a string for their atomic symbol. Distances should be provided in angstrom. Below we initialize a simple instance of MolecularData without performing any electronic structure calculations.
```
from openfermion.chem import MolecularData
# Set parameters to make a simple molecule.
diatomic_bond_length = .7414
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
charge = 0
description = str(diatomic_bond_length)
# Make molecule and print out a few interesting facts about it.
molecule = MolecularData(geometry, basis, multiplicity,
charge, description)
print('Molecule has automatically generated name {}'.format(
molecule.name))
print('Information about this molecule would be saved at:\n{}\n'.format(
molecule.filename))
print('This molecule has {} atoms and {} electrons.'.format(
molecule.n_atoms, molecule.n_electrons))
for atom, atomic_number in zip(molecule.atoms, molecule.protons):
print('Contains {} atom, which has {} protons.'.format(
atom, atomic_number))
```
If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in OpenFermion's MolecularData objects. Currently plugins are available for [Psi4](http://psicode.org/) [(OpenFermion-Psi4)](http://github.com/quantumlib/OpenFermion-Psi4) and [PySCF](https://github.com/sunqm/pyscf) [(OpenFermion-PySCF)](http://github.com/quantumlib/OpenFermion-PySCF), and there may be more in the future. For the purposes of this example, we will load data that ships with OpenFermion to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in openfermion.utils.
```
# Set molecule parameters.
basis = 'sto-3g'
multiplicity = 1
bond_length_interval = 0.1
n_points = 25
# Generate molecule at different bond lengths.
hf_energies = []
fci_energies = []
bond_lengths = []
for point in range(3, n_points + 1):
bond_length = bond_length_interval * point
bond_lengths += [bond_length]
description = str(round(bond_length,2))
print(description)
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., bond_length))]
molecule = MolecularData(
geometry, basis, multiplicity, description=description)
# Load data.
molecule.load()
# Print out some results of calculation.
print('\nAt bond length of {} angstrom, molecular hydrogen has:'.format(
bond_length))
print('Hartree-Fock energy of {} Hartree.'.format(molecule.hf_energy))
print('MP2 energy of {} Hartree.'.format(molecule.mp2_energy))
print('FCI energy of {} Hartree.'.format(molecule.fci_energy))
print('Nuclear repulsion energy between protons is {} Hartree.'.format(
molecule.nuclear_repulsion))
for orbital in range(molecule.n_orbitals):
print('Spatial orbital {} has energy of {} Hartree.'.format(
orbital, molecule.orbital_energies[orbital]))
hf_energies += [molecule.hf_energy]
fci_energies += [molecule.fci_energy]
# Plot.
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(0)
plt.plot(bond_lengths, fci_energies, 'x-')
plt.plot(bond_lengths, hf_energies, 'o-')
plt.ylabel('Energy in Hartree')
plt.xlabel('Bond length in angstrom')
plt.show()
```
The geometry data needed to generate MolecularData can also be retrieved from the PubChem online database by inputting the molecule's name.
```
from openfermion.chem import geometry_from_pubchem
methane_geometry = geometry_from_pubchem('methane')
print(methane_geometry)
```
## InteractionOperator and InteractionRDM for efficient numerical representations
Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ where $h_0$ is a constant shift due to the nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is thus a unique function of the one-particle and two-particle reduced density matrices which are expressed in second quantization as $\rho_{pq} = \left \langle p \mid a^\dagger_p a_q \mid q \right \rangle$ and $\rho_{pqrs} = \left \langle pq \mid a^\dagger_p a^\dagger_q a_r a_s \mid rs \right \rangle$, respectively.
Because the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $\rho_{pq}$) and $h_{pqrs}$ (or $\rho_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() (found in openfermion.transforms) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space.
These classes inherit from the same base class, PolynomialTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator: $\textrm{.constant}$, $\textrm{.one_body_coefficients}$ and $\textrm{.two_body_coefficients}$ . For instance, InteractionOperator[(p, 1), (q, 1), (r, 0), (s, 0)] would return $h_{pqrs}$ and InteractionRDM would return $\rho_{pqrs}$. Importantly, the class supports fast basis transformations using the method PolynomialTensor.rotate_basis(rotation_matrix).
But perhaps most importantly, one can map the InteractionOperator to any of the other data structures we've described here.
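Before the larger LiH example below, here is a minimal sketch of the coefficient access just described; it builds a small InteractionOperator via get_interaction_operator (purely for illustration) and reads entries back with the [] operator.
```
from openfermion.ops import FermionOperator
from openfermion.transforms import get_interaction_operator

# A tiny operator with a constant, a one-body term, and a two-body term.
fermion_op = FermionOperator('', 0.5)
fermion_op += FermionOperator('0^ 0', 1.0)
fermion_op += FermionOperator('1^ 0^ 1 0', 2.0)
interaction_op = get_interaction_operator(fermion_op)

# The [] operator indexes the coefficient tensors by (mode, raise/lower) pairs.
print(interaction_op.constant)
print(interaction_op[(0, 1), (0, 0)])                  # one-body entry for 0^ 0
print(interaction_op[(1, 1), (0, 1), (1, 0), (0, 0)])  # two-body entry for 1^ 0^ 1 0
```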
Below, we load MolecularData from a saved calculation of LiH. We then obtain an InteractionOperator representation of this system in an active space. We then map that operator to qubits. We then demonstrate that one can rotate the orbital basis of the InteractionOperator using random angles to obtain a totally different operator that is still iso-spectral.
```
from openfermion.chem import MolecularData
from openfermion.transforms import get_fermion_operator, jordan_wigner
from openfermion.linalg import get_ground_state, get_sparse_operator
import numpy
import scipy
import scipy.linalg
# Load saved file for LiH.
diatomic_bond_length = 1.45
geometry = [('Li', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
# Set Hamiltonian parameters.
active_space_start = 1
active_space_stop = 3
# Generate and populate instance of MolecularData.
molecule = MolecularData(geometry, basis, multiplicity, description="1.45")
molecule.load()
# Get the Hamiltonian in an active space.
molecular_hamiltonian = molecule.get_molecular_hamiltonian(
occupied_indices=range(active_space_start),
active_indices=range(active_space_start, active_space_stop))
# Map operator to fermions and qubits.
fermion_hamiltonian = get_fermion_operator(molecular_hamiltonian)
qubit_hamiltonian = jordan_wigner(fermion_hamiltonian)
qubit_hamiltonian.compress()
print('The Jordan-Wigner Hamiltonian in canonical basis follows:\n{}'.format(qubit_hamiltonian))
# Get sparse operator and ground state energy.
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy, state = get_ground_state(sparse_hamiltonian)
print('Ground state energy before rotation is {} Hartree.\n'.format(energy))
# Randomly rotate.
n_orbitals = molecular_hamiltonian.n_qubits // 2
n_variables = int(n_orbitals * (n_orbitals - 1) / 2)
numpy.random.seed(1)
random_angles = numpy.pi * (1. - 2. * numpy.random.rand(n_variables))
kappa = numpy.zeros((n_orbitals, n_orbitals))
index = 0
for p in range(n_orbitals):
for q in range(p + 1, n_orbitals):
kappa[p, q] = random_angles[index]
kappa[q, p] = -numpy.conjugate(random_angles[index])
index += 1
# Build the unitary rotation matrix.
difference_matrix = kappa + kappa.transpose()
rotation_matrix = scipy.linalg.expm(kappa)
# Apply the unitary.
molecular_hamiltonian.rotate_basis(rotation_matrix)
# Get qubit Hamiltonian in rotated basis.
qubit_hamiltonian = jordan_wigner(molecular_hamiltonian)
qubit_hamiltonian.compress()
print('The Jordan-Wigner Hamiltonian in rotated basis follows:\n{}'.format(qubit_hamiltonian))
# Get sparse Hamiltonian and energy in rotated basis.
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy, state = get_ground_state(sparse_hamiltonian)
print('Ground state energy after rotation is {} Hartree.'.format(energy))
```
## Quadratic Hamiltonians and Slater determinants
The general electronic structure Hamiltonian
$H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ contains terms that act on up to 4 sites, or
is quartic in the fermionic creation and annihilation operators. However, in many situations
we may fruitfully approximate these Hamiltonians by replacing these quartic terms with
terms that act on at most 2 fermionic sites, or quadratic terms, as in mean-field approximation theory.
These Hamiltonians have a number of
special properties one can exploit for efficient simulation and manipulation of the Hamiltonian, thus
warranting a special data structure. We refer to Hamiltonians which
only contain terms that are quadratic in the fermionic creation and annihilation operators
as quadratic Hamiltonians, and include the general case of non-particle conserving terms as in
a general Bogoliubov transformation. Eigenstates of quadratic Hamiltonians can be prepared
efficiently on both a quantum and classical computer, making them amenable to initial guesses for
many more challenging problems.
A general quadratic Hamiltonian takes the form
$$H = \sum_{p, q} (M_{pq} - \mu \delta_{pq}) a^\dagger_p a_q + \frac{1}{2} \sum_{p, q} (\Delta_{pq} a^\dagger_p a^\dagger_q + \Delta_{pq}^* a_q a_p) + \text{constant},$$
where $M$ is a Hermitian matrix, $\Delta$ is an antisymmetric matrix,
$\delta_{pq}$ is the Kronecker delta symbol, and $\mu$ is a chemical
potential term which we keep separate from $M$ so that we can use it
to adjust the expectation of the total number of particles.
In OpenFermion, quadratic Hamiltonians are conveniently represented and manipulated
using the QuadraticHamiltonian class, which stores $M$, $\Delta$, $\mu$ and the constant. It is specialized to exploit the properties unique to quadratic Hamiltonians. Like InteractionOperator and InteractionRDM, it inherits from the PolynomialTensor class.
The BCS mean-field model of superconductivity is a quadratic Hamiltonian. The following code constructs an instance of this model as a FermionOperator, converts it to a QuadraticHamiltonian, and then computes its ground energy:
```
from openfermion.hamiltonians import mean_field_dwave
from openfermion.transforms import get_quadratic_hamiltonian
# Set model.
x_dimension = 2
y_dimension = 2
tunneling = 2.
sc_gap = 1.
periodic = True
# Get FermionOperator.
mean_field_model = mean_field_dwave(
x_dimension, y_dimension, tunneling, sc_gap, periodic=periodic)
# Convert to QuadraticHamiltonian
quadratic_hamiltonian = get_quadratic_hamiltonian(mean_field_model)
# Compute the ground energy
ground_energy = quadratic_hamiltonian.ground_energy()
print(ground_energy)
```
Any quadratic Hamiltonian may be rewritten in the form
$$H = \sum_p \varepsilon_p b^\dagger_p b_p + \text{constant},$$
where the $b_p$ are new annihilation operators that satisfy the fermionic anticommutation relations, and which are linear combinations of the old creation and annihilation operators. This form of $H$ makes it easy to deduce its eigenvalues; they are sums of subsets of the $\varepsilon_p$, which we call the orbital energies of $H$. The following code computes the orbital energies and the constant:
```
orbital_energies, constant = quadratic_hamiltonian.orbital_energies()
print(orbital_energies)
print()
print(constant)
```
Eigenstates of quadratic hamiltonians are also known as fermionic Gaussian states, and they can be prepared efficiently on a quantum computer. One can use OpenFermion to obtain circuits for preparing these states. The following code obtains the description of a circuit which prepares the ground state (operations that can be performed in parallel are grouped together), along with a description of the starting state to which the circuit should be applied:
```
from openfermion.circuits import gaussian_state_preparation_circuit
circuit_description, start_orbitals = gaussian_state_preparation_circuit(quadratic_hamiltonian)
for parallel_ops in circuit_description:
print(parallel_ops)
print('')
print(start_orbitals)
```
In the circuit description, each elementary operation is either a tuple of the form $(i, j, \theta, \varphi)$, indicating the operation $\exp[i \varphi a_j^\dagger a_j]\exp[\theta (a_i^\dagger a_j - a_j^\dagger a_i)]$, which is a Givens rotation of modes $i$ and $j$, or the string 'pht', indicating the particle-hole transformation on the last fermionic mode, which is the operator $\mathcal{B}$ such that $\mathcal{B} a_N \mathcal{B}^\dagger = a_N^\dagger$ and leaves the rest of the ladder operators unchanged. Operations that can be performed in parallel are grouped together.
In the special case that a quadratic Hamiltonian conserves particle number ($\Delta = 0$), its eigenstates take the form
$$\lvert \Psi_S \rangle = b^\dagger_{1}\cdots b^\dagger_{N_f}\lvert \text{vac} \rangle,\qquad
b^\dagger_{p} = \sum_{q=1}^N Q_{pq}a^\dagger_q,$$
where $Q$ is an $N_f \times N$ matrix with orthonormal rows. These states are also known as Slater determinants. OpenFermion also provides functionality to obtain circuits for preparing Slater determinants starting with the matrix $Q$ as the input.
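A short sketch of that functionality follows; it assumes the `slater_determinant_preparation_circuit` helper in `openfermion.circuits` (the particle-conserving companion of `gaussian_state_preparation_circuit` used above) and an illustrative $Q$ built from a random orthogonal matrix.
```
import numpy
from scipy.linalg import qr
from openfermion.circuits import slater_determinant_preparation_circuit

# Build a Q matrix with orthonormal rows: 2 occupied orbitals over 4 modes.
numpy.random.seed(0)
orthogonal, _ = qr(numpy.random.randn(4, 4))
Q = orthogonal[:2, :]

# Each entry of the description is a parallel layer of Givens rotations (i, j, theta, phi).
circuit_description = slater_determinant_preparation_circuit(Q)
for parallel_ops in circuit_description:
    print(parallel_ops)
```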
|
github_jupyter
|
<img src="img/python-logo-notext.svg"
style="display:block;margin:auto;width:10%"/>
<h1 style="text-align:center;">Python: Pandas Data Frames 1</h1>
<h2 style="text-align:center;">Coding Akademie München GmbH</h2>
<br/>
<div style="text-align:center;">Dr. Matthias Hölzl</div>
<div style="text-align:center;">Allaithy Raed</div>
# Data Frames
Data frames are the most frequently used data structure in Pandas.
They make it convenient to read in, process, and store data.
Conceptually, a data frame consists of several `Series` instances that share a common index.
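A minimal sketch of that idea (the column names here are only for illustration):
```
import pandas as pd

# Two Series with the same index become the columns of one data frame.
s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s2 = pd.Series([1.5, 2.5, 3.5], index=['a', 'b', 'c'])
pd.DataFrame({'counts': s1, 'values': s2})
```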
```
import numpy as np
import pandas as pd
```
## Creating a Data Frame
### From a NumPy Array
```
def create_data_frame():
rng = np.random.default_rng(42)
array = rng.normal(size=(5, 4), scale=5.0)
index = 'A B C D E'.split()
columns = 'w x y z'.split()
return pd.DataFrame(array, index=index, columns=columns)
df = create_data_frame()
df
type(df)
```
### From a CSV File
```
df_csv = pd.read_csv("example_data.csv")
df_csv
df_csv = pd.read_csv("example_data.csv", index_col=0)
df_csv
```
### From an Excel File
```
df_excel = pd.read_excel("excel_data.xlsx", index_col=0)
df_excel
df_excel2 = pd.read_excel("excel_other_sheet.xlsx", index_col=0)
df_excel2
df_excel2 = pd.read_excel("excel_other_sheet.xlsx", index_col=0, sheet_name='Another Sheet')
df_excel2.head()
```
### Other Formats:
```
pd.read_clipboard
pd.read_html
pd.read_json
pd.read_pickle
pd.read_sql; # Uses SQLAlchemy to access a database
# etc.
```
### Indexes and Operations
```
df_csv.head()
df_csv.tail()
df = create_data_frame()
df['w']
type(df['w'])
# Should not be used (attribute access can collide with DataFrame methods)...
df.w
df[['w', 'y']]
df.index
df.index.is_monotonic_increasing
df.size
df.ndim
df.shape
```
### Creating, Renaming, and Deleting Columns
```
df = create_data_frame()
df['Summe aus w und y'] = df['w'] + df['y']
df
df.rename(columns={'Summe aus w und y': 'w + y'})
df
df.rename(columns={'Summe aus w und y': 'w + y'}, index={'E': 'Z'}, inplace=True)
df
type(df['y'])
del df['y']
df
df.drop('A')
df
df.drop('B', inplace=True)
df
df.drop('z', axis=1)
df
df.drop('z', axis=1, inplace=True)
df
```
## Selection
```
df = create_data_frame()
df
df['w']
# Error: 'A' is a row label, not a column
# df['A']
df.loc['B']
type(df.loc['B'])
df
df.iloc[1]
df.loc[['A', 'C']]
df.loc[['A', 'C'], ['x', 'y']]
df.loc['B', 'z']
df.iloc[[1, 2], [0, 3]]
df.iloc[0, 0]
```
## Conditional Selection
```
df = create_data_frame()
df
df > 0
df[df > 0]
df['w'] > 0
df[df['w'] > 0]
df[df['w'] > 0][['x', 'y']]
df[(df['w'] > 0) & (df['x'] < 0)]
```
# Information about Data Frames
```
# `array`, `index`, and `columns` are local to create_data_frame(), so rebuild the frame here.
df = create_data_frame()
df['txt'] = 'a b c d e'.split()
df.iloc[1, 1] = np.nan
df
df.describe()
df.info()
df.dtypes
```
## Data Frame Index
```
df = create_data_frame()
df['txt'] = 'a b c d e'.split()
df
df.reset_index()
df
df.reset_index(inplace=True)
df
df.rename(columns={'index': 'old_index'}, inplace=True)
df
df.set_index('txt')
df
df.set_index('txt', inplace=True)
df
df.set_index('old_index', inplace=True)
df
df.info()
df.index
df.index.name = None
df
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from libwallerlab.opticsalgorithms.motiondeblur import blurkernel
```
# Overview
This notebook explores a SNR vs. acquisition time analysis for strobed illumination, stop and stare, and coded illumination acquisition strategies.
First, we determine a relationship between t_frame (frame rate) and t_exposure (exposure time). Then, we relate t_exposure to SNR for each method. These relationships should be smooth but non-linear.
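As a rough orientation for the plots below, here is an illustrative sketch of the kind of relationship we have in mind; the actual conversion in this notebook is done by `blurkernel.dnf2snr`, and the photon rate used here is a made-up placeholder.
```
import numpy as np

def approx_snr(t_exposure_s, dnf, photons_per_s=1e6):
    # Shot-noise-limited sketch: raw SNR grows like sqrt(collected signal),
    # and deconvolution divides it by the noise amplification factor (DNF).
    signal = photons_per_s * t_exposure_s
    if signal <= 0 or dnf <= 0:
        return 0.0
    return np.sqrt(signal) / dnf

print(approx_snr(0.010, 1.0))   # strobed-like: short exposure, no deconvolution penalty
print(approx_snr(0.090, 20.0))  # coded-like: longer exposure, but a DNF penalty
```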
```
# Define constants
ps = 6.5e-3 #mm
mag = 20
ps_eff = ps / mag #um
n_px = np.asarray([2100, 2500])
fov = n_px * ps_eff
motion_axis = 0
motion_velocity_mm_s = 20
motion_acceleration_mm_s_s = 1e4
t_settle = 0.1 #s
t_ro = 0.01 #s
figure_directory = '/Users/zfphil/Desktop/figures/'
!mkdir -p /Users/zfphil/Desktop/figures/
np.random.choice(10)
import math

def genBlurVector_rand(kernel_length, beta=0.5, n_tests=10, metric='dnf'):
    '''
    Helper function for solving for a blur vector in terms of its condition number (or DNF).
    '''
    kernel_list = []
    n_elements_max = math.floor(beta * kernel_length)
for test in range(n_tests):
indicies = np.random.permutation(kernel_length)
kernel = np.zeros(kernel_length)
kernel[indicies[:n_elements_max]] = 1.0
# indicies = np.arange(kernel_length)
# for index in range(n_elements_max):
# rand_index = np.random.randint(0, high=np.size(indicies)-1, size=1)
# kernel[indicies[rand_index]] = 1.
# indicies = np.delete(indicies, rand_index)
rand_index = np.random.permutation(kernel_length)[n_elements_max]
kernel[rand_index] = beta * kernel_length - np.sum(kernel)
assert beta * kernel_length - np.sum(kernel) <= 1
kernel_list.append(kernel)
if metric == 'cond':
        # Determine kernel with the best (lowest) condition number
metric_best = 1e10
kernel_best = []
for kernel in kernel_list:
spectra = np.abs(np.fft.fft(kernel))
kappa = np.max(spectra) / np.min(spectra)
if kappa < metric_best:
kernel_best = kernel
metric_best = kappa
else:
        # Determine kernel with the best (lowest) DNF
metric_best = 1e10
kernel_best = []
for kernel in kernel_list:
dnf = (np.sum(1 / np.abs(scipy.fftpack.fft(kernel)) ** 2))
if dnf < metric_best:
kernel_best = kernel
metric_best = dnf
return (metric_best, kernel_best)
# import math
# def condNumToDnf(cond, blur_length, image_size, beta=0.1):
# dnf = ((blur_length * beta) ** 2 / cond ** 2) * math.sqrt(np.prod(image_size))
# return dnf
# # condNumToDnf(40, 50, (1000,1000))
import scipy
def calcDnfFromKernel(x):
from libwallerlab.utilities.opticstools import Ft, iFt
return (np.sum(1 / np.abs(scipy.fftpack.fft(x)) ** 2))
def getOptimalDnf(kernel_size, beta=0.5, n_tests=100, metric = 'dnf'):
    dnf, x = genBlurVector_rand(kernel_size, beta=beta, n_tests=n_tests, metric=metric)
return(calcDnfFromKernel(x))
getOptimalDnf(100, n_tests=200, metric='dnf')
def frameRateToExposure(t_frame, acquisition_strategy, motion_velocity_mm_s=10,
motion_acceleration_mm_s_s=1e4, t_readout=0.01, t_settle=0.1,
fov=[1,1], motion_axis=0, ps_eff_mm=6.5e-3/20, beta_coded=0.5,
min_strobe_time_s=10e-6):
if 'strobe' in acquisition_strategy:
t_exp_camera = t_frame - t_readout
v = fov[motion_axis] / t_frame
t_illum_strobe = ps_eff / v
if t_illum_strobe < min_strobe_time_s:
t_exp = 0
else:
t_exp = t_illum_strobe
# No deconvolution here
dnf = 1
elif 'stop_and_stare' in acquisition_strategy:
t_start_stop = motion_velocity_mm_s / motion_acceleration_mm_s_s
d_start_stop = 0.5 * motion_acceleration_mm_s_s * t_start_stop ** 2
t_move = (fov[motion_axis] - d_start_stop) / motion_velocity_mm_s
t_exp_camera = t_frame - t_move - t_start_stop + t_readout
t_exp = t_exp_camera # Illumination is on the whole time
# No deconvolution here
dnf = 1
elif 'code' in acquisition_strategy:
t_exp_camera = t_frame - t_readout
# Determine kernel length
kernel_length = int(np.ceil(t_exp_camera / t_frame * fov[motion_axis] / ps_eff))
kernel_length = max(kernel_length, 1)
if kernel_length == 1:
dnf = 1
else:
# dnf = blurkernel.dnfUpperBound(kernel_length, beta_coded)
dnf = getOptimalDnf(kernel_length, beta=beta_coded, n_tests=10)
t_exp_camera = t_frame - t_readout
v = fov[motion_axis] / t_frame
t_illum_strobe = ps_eff / v
if t_illum_strobe < min_strobe_time_s:
t_exp = 0
else:
t_exp = t_exp_camera * beta_coded
# # assert t_exp > 0
if t_exp <= 0 or t_exp_camera <= 0:
t_exp = 0
return(t_exp, dnf)
frame_time = 0.1
t_strobe, dnf_strobd = frameRateToExposure(frame_time, 'strobe', fov=fov)
snr_strobe = blurkernel.dnf2snr(dnf_strobd, t_strobe*1000)
print("Strobed illumination will have exposure time %.5f seconds and SNR %.5f" % (t_strobe, snr_strobe))
t_sns, dnf_sns = frameRateToExposure(frame_time, 'stop_and_stare', fov=fov)
snr_sns = blurkernel.dnf2snr(dnf_sns, t_sns*1000)
print("Stop-and-stare illumination will have exposure time %.5f seconds and SNR %.5f" % (t_sns, snr_sns))
t_coded, dnf_coded = frameRateToExposure(frame_time, 'code', fov=fov)
snr_coded = blurkernel.dnf2snr(dnf_coded, t_coded*1000)
print("Coded illumination will have exposure time %.5f seconds and SNR %.5f" % (t_coded, snr_coded))
```
## Plot SNR vs Frame Rate
```
frame_rates = np.arange(1,80,0.1)
snr_strobe_list = []
snr_sns_list = []
snr_coded_list_25 = []
snr_coded_list_10 = []
snr_coded_list_50 = []
snr_coded_list_75 = []
snr_coded_list_99 = []
for index, rate in enumerate(frame_rates):
t_frame = 1 / rate
t_strobe, dnf_strobe = frameRateToExposure(t_frame, 'strobe', fov=fov)
snr_strobe_list.append(blurkernel.dnf2snr(dnf_strobe, t_strobe*1000))
t_sns, dnf_sns = frameRateToExposure(t_frame, 'stop_and_stare', fov=fov)
snr_sns_list.append(blurkernel.dnf2snr(dnf_sns, t_sns*1000))
t_coded_10, dnf_coded_10 = frameRateToExposure(t_frame, 'code', fov=fov, beta_coded=0.05)
snr_coded_list_10.append(blurkernel.dnf2snr(dnf_coded_10, t_coded_10*1000))
t_coded_50, dnf_coded_50 = frameRateToExposure(t_frame, 'code', fov=fov, beta_coded=0.5)
snr_coded_list_50.append(blurkernel.dnf2snr(dnf_coded_50, t_coded_50*1000))
# t_coded_75, dnf_coded_75 = frameRateToExposure(t_frame, 'code', fov=fov, beta_coded=0.75)
# snr_coded_list_75.append(blurkernel.dnf2snr(dnf_coded_75, t_coded_75))
t_coded_99, dnf_coded_99 = frameRateToExposure(t_frame, 'code', fov=fov, beta_coded=0.95)
snr_coded_list_99.append(blurkernel.dnf2snr(dnf_coded_99, t_coded_99*1000))
# snr_coded_list.append(0)
# print("Coded illumination will have exposure time %.3f seconds and SNR %.2f" % (t_coded, snr_coded))
# print("Finished rate %d of %d" % (index, len(frame_rates)))
# plt.style.use('seaborn-dark')
from jupyterthemes import jtplot  # jtplot.style() requires the jupyterthemes package
jtplot.style()
# plt.style.use('classic')
plt.figure(figsize=(12,8))
plt.semilogy(frame_rates, snr_coded_list_10, 'b-')
plt.semilogy(frame_rates, snr_coded_list_50, 'g-')
plt.semilogy(frame_rates, snr_coded_list_99, 'y')
plt.semilogy(frame_rates, snr_sns_list, 'r-', linewidth=2)
plt.semilogy(frame_rates, snr_strobe_list, 'w-', linewidth=2)
plt.ylim((0.5, 5000))
plt.xlim((0,75))
plt.legend(('Coded, 5% Illuminated', 'Coded, 50% Illuminated', 'Coded, 95% Illuminated', 'Stop-and-Stare', 'Strobed'), fontsize=24)
plt.xlabel('Frame Rate (Hz)', fontsize=28)
plt.ylabel('SNR', fontsize=28)
ax = plt.gca()
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(24)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(24)
plt.grid('on', which='both')
plt.tight_layout()
plt.savefig(figure_directory + 'strobe_sns_coded.png', transparent=True)
# plt.style.use('seaborn-dark')
jtplot.style()
# plt.style.use('classic')
plt.figure(figsize=(12,8))
plt.semilogy(frame_rates, snr_sns_list, 'r-', linewidth=2)
plt.semilogy(frame_rates, snr_strobe_list, 'w-', linewidth=2)
plt.ylim((0.5, 5000))
plt.xlim((0,75))
plt.legend(('Stop-and-Stare', 'Strobed'), fontsize=24)
plt.xlabel('Frame Rate (Hz)', fontsize=28)
plt.ylabel('SNR', fontsize=28)
ax = plt.gca()
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(24)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(24)
plt.grid('on', which='both')
plt.tight_layout()
plt.savefig(figure_directory + 'strobe_sns.png', transparent=True)
```
# Blur Kernel Optimization
```
data = np.load('single_illums.npz')
kernel_vector = data['kernel_vector']
kernel_random = data['kernel_random']
blur_kernel_map = np.zeros(object_size)
for position_index, position in enumerate(point_list):
blur_kernel_map[position[0], position[1]] = kernel_vector[position_index]
iterates = np.array(result['history']['x']) #.T
print(iterates.shape)
num_frames = iterates.shape[1]
total_its = iterates.shape[1]
interval = total_its / num_frames
#interval=2
#ax = plt.subplot2grid((6, 1), (1, 5))
#ax = plt.subplot2grid((6, 1), (1, 0), colspan=5)
initial_power_spectrum = 0;
blur_operator = W * 0.5*np.sum(kernel_map, 0).astype(np.complex64).reshape(-1)
static_power_spectrum = np.sum(np.abs(wotf.Ft(blur_operator.reshape(image_size))), axis=0)
sigma_min_static = np.amin(static_power_spectrum)
sigma_max_static = np.amax(static_power_spectrum)
# Generate spatial frequency coordinates
ps = 6.5
fov = 2000 * 6.5e-3/20
dk = 1/fov
freqs = np.arange(-len(static_power_spectrum) // 2, len(static_power_spectrum) // 2) * dk
assert len(freqs) == len(static_power_spectrum)
kernel_random = iterates[:,0]
for i in range(num_frames):
illum = iterates[:,int(interval*i)]
blur_operator_illum = W * (kernel_map.T.dot(iterates[:,int(interval*i)])).T.astype(np.complex64).reshape(-1)
power_spectrum = np.sum(np.abs(wotf.Ft(blur_operator_illum.reshape(image_size))), axis=0)
sigma_min = np.amin(power_spectrum)
sigma_max = np.amax(power_spectrum)
condition = sigma_max/sigma_min
if i==0:
initial_power_spectrum = power_spectrum
fig = plt.figure(figsize=(10,5))
ax1 = plt.subplot2grid((8, 1), (0, 0), rowspan=4)
ax2 = plt.subplot2grid((8, 1), (6, 0), rowspan=2)
ax2.step(illum, 'orange', linewidth=3)
ax2.set_ylim([-0.1,1.1])
ax2.set_xlim([0,24])
ax2.set_title('Illumination Pattern', fontsize=24, color='w')
ax1.set_title('Power Spectrum', fontsize=24, color='w')
# ax1.set_xlim([0,127])
# ax1.set_ylim([10,10^4])
# ax2.set_xticklabels([])
ax1.set_ylabel('Energy', color='w')
ax1.set_xlabel('Spatial Frequency (cycles/mm)', color='w')
ax2.set_ylabel('Intensity', color='w')
ax2.set_xlabel('Position', color='w')
ax2.xaxis.set_ticks_position('none')
ax2.yaxis.set_ticks_position('none')
#ax2.axison = False
ax2.set_yticklabels([0,0,1])
# ax1.semilogy(initial_power_spectrum, '--', color='white')
# ax1.semilogy(static_power_spectrum, '--', color='white')
ax1.semilogy(freqs, sigma_min*np.ones(power_spectrum.size), color='r', linewidth=3)
ax1.semilogy(freqs, sigma_max*np.ones(power_spectrum.size), color='r', linewidth=3)
ax1.semilogy(freqs, power_spectrum, color='blue', linewidth=3)
ax1.set_ylim((10,6000))
# ax1.set_xticklabels([])
#ax1.set_yticklabels([])
#plt.suptitle('iteration '+str(int(interval*i))+',\t$\kappa=$'+str(np.round(condition,3)))
plt.text(0.6,4.7,'iteration '+str(int(interval*i))+', $\kappa=$'+str(np.round(condition,3)),fontsize=15, color='w')
# Set Axis Colors
for ax in [ax1, ax2]:
ax.tick_params(axis='both', which='major', labelsize=14, color='w')
ax.tick_params(axis='both', which='minor', labelsize=14, color='w')
[i.set_color("w") for i in ax.get_xticklabels()]
[i.set_color("w") for i in ax.get_yticklabels()]
plt.savefig("images/power_spectrum_optimization" + str(i) + ".png")
```
|
github_jupyter
|
**Pix-2-Pix Model using TensorFlow and Keras**
A port of the pix2pix model built using TensorFlow's high-level `tf.keras` API.
Note: a GPU is required to make this model train quickly; otherwise training could take hours.
Original : https://www.kaggle.com/vikramtiwari/pix-2-pix-model-using-tensorflow-and-keras/notebook
## Installations
```
requirements = """
tensorflow
drawSvg
matplotlib
numpy
scipy
pillow
#urllib
#skimage
scikit-image
#gzip
#pickle
"""
%store requirements > requirements.txt
!pip install -r requirements.txt
```
## Data Import
```
# !mkdir datasets
# URL="https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/facade.tar.gz"
# TAR_FILE="./datasets/facade.tar.gz"
# TARGET_DIR="./datasets/facade/"
# !wget -N URL -O TAR_FILE
# !mkdir TARGET_DIR
# !tar -zxvf TAR_FILE -C ./datasets/
# !rm TAR_FILE
#_URL = 'https://drive.google.com/uc?export=download&id=1dnLTTT19YROjpjwZIZpJ1fxAd91cGBJv'
#path_to_zip = tf.keras.utils.get_file('pix2pix.zip', origin=_URL,extract=True)
#PATH = os.path.join(os.path.dirname(path_to_zip), 'pix2pix/')
```
## Imports
```
import os
import datetime
import imageio
import skimage
import scipy #
# from PIL import Image as Img
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from glob import glob
from IPython.display import Image
tf.logging.set_verbosity(tf.logging.ERROR)
datafolderpath = "/content/drive/My Drive/ToDos/Research/MidcurveNN/code/data/"
datasetpath = datafolderpath+ "pix2pix/datasets/pix2pix/"
# # datasetpath = "./"
# Run this cell to mount your Google Drive.
from google.colab import drive
drive.mount('/content/drive')
!ls $datafolderpath
class DataLoader():
def __init__(self, dataset_name, img_res=(256, 256)):
self.dataset_name = dataset_name
self.img_res = img_res
def binarize(self, image):
h, w = image.shape
for i in range(h):
for j in range(w):
if image[i][j] < 195:
image[i][j] = 0
return image
def load_data(self, batch_size=1, is_testing=False):
data_type = "train" if not is_testing else "test"
path = glob(datafolderpath+'%s/datasets/%s/%s/*' % (self.dataset_name, self.dataset_name, data_type))
#path = glob(PATH + '%s/*' % (data_type))
batch_images = np.random.choice(path, size=batch_size)
imgs_A = []
imgs_B = []
for img_path in batch_images:
img = self.imread(img_path)
img = self.binarize(img)
img = np.expand_dims(img, axis=-1)
h, w, _ = img.shape
_w = int(w/2)
img_A, img_B = img[:, :_w, :], img[:, _w:, :]
# img_A = scipy.misc.imresize(img_A, self.img_res)
# img_A = np.array(Img.fromarray(img_A).resize(self.img_res))
#img_A = np.array(skimage.transform.resize(img_A,self.img_res))
# img_B = scipy.misc.imresize(img_B, self.img_res)
# img_B = np.array(Img.fromarray(img_B).resize(self.img_res))
#img_B = np.array(skimage.transform.resize(img_B,self.img_res))
# If training => do random flip
if not is_testing and np.random.random() < 0.5:
img_A = np.fliplr(img_A)
img_B = np.fliplr(img_B)
imgs_A.append(img_A)
imgs_B.append(img_B)
imgs_A = np.array(imgs_A)/127.5 - 1.
imgs_B = np.array(imgs_B)/127.5 - 1.
return imgs_A, imgs_B
def load_batch(self, batch_size=1, is_testing=False):
data_type = "train" if not is_testing else "val"
path = glob(datafolderpath+'%s/datasets/%s/%s/*' % (self.dataset_name, self.dataset_name, data_type))
#path = glob(PATH + '%s/*' % (data_type))
self.n_batches = int(len(path) / batch_size)
for i in range(self.n_batches-1):
batch = path[i*batch_size:(i+1)*batch_size]
imgs_A, imgs_B = [], []
for img in batch:
img = self.imread(img)
img = self.binarize(img)
img = np.expand_dims(img, axis=-1)
h, w, _ = img.shape
half_w = int(w/2)
img_A = img[:, :half_w, :]
img_B = img[:, half_w:, :]
# img_A = scipy.misc.imresize(img_A, self.img_res)
# img_A = np.array(Img.fromarray(img_A).resize(self.img_res))
#img_A = np.array(skimage.transform.resize(img_A,self.img_res))
# img_B = scipy.misc.imresize(img_B, self.img_res)
# img_B = np.array(Img.fromarray(img_B).resize(self.img_res))
#img_B = np.array(skimage.transform.resize(img_B,self.img_res))
if not is_testing and np.random.random() > 0.5:
img_A = np.fliplr(img_A)
img_B = np.fliplr(img_B)
imgs_A.append(img_A)
imgs_B.append(img_B)
imgs_A = np.array(imgs_A)/127.5 - 1.
imgs_B = np.array(imgs_B)/127.5 - 1.
yield imgs_A, imgs_B
def imread(self, path):
return imageio.imread(path).astype(np.float)
class Pix2Pix():
def __init__(self):
# Input shape
self.img_rows = 256
self.img_cols = 256
self.channels = 1
self.img_shape = (self.img_rows, self.img_cols, self.channels)
# Configure data loader
self.dataset_name = 'pix2pix'
self.data_loader = DataLoader(dataset_name=self.dataset_name,
img_res=(self.img_rows, self.img_cols))
# Calculate output shape of D (PatchGAN)
patch = int(self.img_rows / 2**4)
self.disc_patch = (patch, patch, 1)
# Number of filters in the first layer of G and D
self.gf = int(self.img_rows/4) # 64
self.df = int(self.img_rows/4) # 64
optimizer = tf.keras.optimizers.Adam(0.0002, 0.5)
# Build and compile the discriminator
self.discriminator = self.build_discriminator()
self.discriminator.compile(loss='mse',
optimizer=optimizer,
metrics=['accuracy'])
#-------------------------
# Construct Computational
# Graph of Generator
#-------------------------
# Build the generator
self.generator = self.build_generator()
# Input images and their conditioning images
img_A = tf.keras.layers.Input(shape=self.img_shape)
img_B = tf.keras.layers.Input(shape=self.img_shape)
# By conditioning on B generate a fake version of A
#fake_A = self.generator(img_B)
#By conditioning on A generate a fake version of B
fake_B = self.generator(img_A)
# For the combined model we will only train the generator
self.discriminator.trainable = False
# Discriminators determines validity of translated images / condition pairs
#valid = self.discriminator([fake_A, img_B])
valid = self.discriminator([img_A, fake_B])
self.combined = tf.keras.models.Model(inputs=[img_A, img_B], outputs=[valid, fake_B])
self.combined.compile(loss=['mse', 'mae'],
loss_weights=[1, 100],
optimizer=optimizer)
def build_generator(self):
"""U-Net Generator"""
def conv2d(layer_input, filters, f_size=4, bn=True):
"""Layers used during downsampling"""
d = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input)
d = tf.keras.layers.LeakyReLU(alpha=0.2)(d)
if bn:
d = tf.keras.layers.BatchNormalization(momentum=0.8)(d)
return d
def deconv2d(layer_input, skip_input, filters, f_size=4, dropout_rate=0):
"""Layers used during upsampling"""
u = tf.keras.layers.UpSampling2D(size=2)(layer_input)
u = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=1, padding='same', activation='relu')(u)
if dropout_rate:
u = tf.keras.layers.Dropout(dropout_rate)(u)
u = tf.keras.layers.BatchNormalization(momentum=0.8)(u)
u = tf.keras.layers.Concatenate()([u, skip_input])
return u
# Image input
d0 = tf.keras.layers.Input(shape=self.img_shape)
# Downsampling
d1 = conv2d(d0, self.gf, bn=False)
d2 = conv2d(d1, self.gf*2)
d3 = conv2d(d2, self.gf*4)
d4 = conv2d(d3, self.gf*8)
d5 = conv2d(d4, self.gf*8)
d6 = conv2d(d5, self.gf*8)
d7 = conv2d(d6, self.gf*8)
# Upsampling
u1 = deconv2d(d7, d6, self.gf*8)
u2 = deconv2d(u1, d5, self.gf*8)
u3 = deconv2d(u2, d4, self.gf*8)
u4 = deconv2d(u3, d3, self.gf*4)
u5 = deconv2d(u4, d2, self.gf*2)
u6 = deconv2d(u5, d1, self.gf)
u7 = tf.keras.layers.UpSampling2D(size=2)(u6)
output_img = tf.keras.layers.Conv2D(self.channels, kernel_size=4, strides=1, padding='same', activation='tanh')(u7)
return tf.keras.models.Model(d0, output_img)
def build_discriminator(self):
def d_layer(layer_input, filters, f_size=4, bn=True):
"""Discriminator layer"""
d = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input)
d = tf.keras.layers.LeakyReLU(alpha=0.2)(d)
if bn:
d = tf.keras.layers.BatchNormalization(momentum=0.8)(d)
return d
img_A = tf.keras.layers.Input(shape=self.img_shape)
img_B = tf.keras.layers.Input(shape=self.img_shape)
# Concatenate image and conditioning image by channels to produce input
combined_imgs = tf.keras.layers.Concatenate(axis=-1)([img_A, img_B])
d1 = d_layer(combined_imgs, self.df, bn=False)
d2 = d_layer(d1, self.df*2)
d3 = d_layer(d2, self.df*4)
d4 = d_layer(d3, self.df*8)
validity = tf.keras.layers.Conv2D(1, kernel_size=4, strides=1, padding='same')(d4)
return tf.keras.models.Model([img_A, img_B], validity)
def train(self, epochs, batch_size=1, sample_interval=50):
start_time = datetime.datetime.now()
# Adversarial loss ground truths
valid = np.ones((batch_size,) + self.disc_patch)
fake = np.zeros((batch_size,) + self.disc_patch)
for epoch in range(epochs):
for batch_i, (imgs_A, imgs_B) in enumerate(self.data_loader.load_batch(batch_size)):
# ---------------------
# Train Discriminator
# ---------------------
# Condition on B and generate a translated version
#fake_A = self.generator.predict(imgs_B)
#Condition on A and generate a translated version
fake_B = self.generator.predict(imgs_A)
# Train the discriminators (original images = real / generated = Fake)
d_loss_real = self.discriminator.train_on_batch([imgs_A, imgs_B], valid)
d_loss_fake = self.discriminator.train_on_batch([imgs_A, fake_B], fake)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
# -----------------
# Train Generator
# -----------------
# Train the generators
g_loss = self.combined.train_on_batch([imgs_A, imgs_B], [valid, imgs_B])
elapsed_time = datetime.datetime.now() - start_time
# Plot the progress
print ("[Epoch %d/%d] [Batch %d/%d] [D loss: %f, acc: %3d%%] [G loss: %f] time: %s" % (epoch, epochs,
batch_i, self.data_loader.n_batches,
d_loss[0], 100*d_loss[1],
g_loss[0],
elapsed_time))
# If at save interval => save generated image samples
if batch_i % sample_interval == 0:
self.sample_images(epoch, batch_i)
def sample_images(self, epoch, batch_i):
os.makedirs(datafolderpath+'images/%s' % self.dataset_name, exist_ok=True)
r, c = 3, 3
imgs_A, imgs_B = self.data_loader.load_data(batch_size=3, is_testing=True)
fake_B = self.generator.predict(imgs_A)
gen_imgs = np.concatenate([imgs_A, fake_B, imgs_B])
# Rescale images 0 - 1
gen_imgs = 0.5 * gen_imgs + 0.5
titles = ['Condition', 'Generated', 'Original']
fig, axs = plt.subplots(r, c, figsize=(15,15))
cnt = 0
for i in range(r):
for j in range(c):
axs[i,j].imshow(gen_imgs[cnt][:,:,0], cmap='gray')
axs[i, j].set_title(titles[i])
axs[i,j].axis('off')
cnt += 1
fig.savefig(datafolderpath+"images/%s/%d_%d.png" % (self.dataset_name, epoch, batch_i))
plt.close()
gan = Pix2Pix()
# gan.train(epochs=200, batch_size=1, sample_interval=200)
gan.train(epochs=2, batch_size=1, sample_interval=200)
# training logs are hidden in published notebook
```
Let's see how our model performed over time.
```
from PIL import Image as Img
Image('/content/drive/My Drive/ToDos/Research/MidcurveNN/code/data/images/pix2pix/0_0.png')
Img('/content/drive/My Drive/ToDos/Research/MidcurveNN/code/data/images/pix2pix/0_200.png')
```
This is the result of training for only 2 epochs. Training the model for more epochs will produce better results. Also, try this model with different datasets.
```
```
|
github_jupyter
|
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $
<font style="font-size:28px;" align="left"><b> <font color="blue"> Solutions for </font>Reflections </b></font>
<br>
_prepared by Abuzer Yakaryilmaz_
<br><br>
<a id="task1"></a>
<h3> Task 1</h3>
Create a quantum circuit with 5 qubits.
Apply h-gate (Hadamard operator) to each qubit.
Apply z-gate ($Z$ operator) to randomly picked qubits. (i.e., $ mycircuit.z(qreg[i]) $)
Apply h-gate to each qubit.
Measure each qubit.
Execute your program 1000 times.
Compare the outcomes of the qubits affected by z-gates, and the outcomes of the qubits not affected by z-gates.
Does z-gate change the outcome?
Why?
<h3> Solution </h3>
```
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# import randrange for random choices
from random import randrange
number_of_qubit = 5
# define a quantum register with 5 qubits
q = QuantumRegister(number_of_qubit)
# define a classical register with 5 bits
c = ClassicalRegister(number_of_qubit)
# define our quantum circuit
qc = QuantumCircuit(q,c)
# apply h-gate to all qubits
for i in range(number_of_qubit):
qc.h(q[i])
# apply z-gate to randomly picked qubits
for i in range(number_of_qubit):
if randrange(2) == 0: # the qubit with index i is picked to apply z-gate
qc.z(q[i])
# apply h-gate to all qubits
for i in range(number_of_qubit):
qc.h(q[i])
qc.barrier()
# measure all qubits
qc.measure(q,c)
# draw the circuit
display(qc.draw(output='mpl'))
# execute the circuit 1000 times in the local simulator
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=1000)
counts = job.result().get_counts(qc)
print(counts)
```
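One way to see the answer: each qubit starts in state $\vzero$, the first h-gate maps it to $\stateplus$, a z-gate flips the sign of the second amplitude to give $\stateminus$, and the second h-gate maps $\stateminus$ to $\vone$. Equivalently, $ HZH = X $ while $ HH = I $, so a qubit that received a z-gate is observed as 1 with certainty and a qubit that did not is observed as 0. So yes, the z-gate changes the outcome, even though by itself it only changes the sign of an amplitude.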
<a id="task2"></a>
<h3> Task 2 </h3>
Randomly create a quantum state and multiply it with Hadamard matrix to find its reflection.
Draw both states.
Repeat the task for a few times.
<h3>Solution</h3>
A function for randomly creating a 2-dimensional quantum state:
```
# randomly create a 2-dimensional quantum state
from math import cos, sin, pi
from random import randrange
def random_qstate_by_angle():
angle_degree = randrange(360)
angle_radian = 2*pi*angle_degree/360
return [cos(angle_radian),sin(angle_radian)]
%run quantum.py
draw_qubit()
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
[x1,y1] = random_qstate_by_angle()
print(x1,y1)
sqrttwo=2**0.5
oversqrttwo = 1/sqrttwo
[x2,y2] = [ oversqrttwo*x1 + oversqrttwo*y1 , oversqrttwo*x1 - oversqrttwo*y1 ]
print(x2,y2)
draw_quantum_state(x1,y1,"main")
draw_quantum_state(x2,y2,"ref")
show_plt()
```
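Note that $ \hadamard \myvector{x_1 \\ y_1} = \myvector{ \sqrttwo (x_1+y_1) \\ \sqrttwo (x_1-y_1) } $, which is exactly the computation of $[x_2,y_2]$ in the cell above; the dotted red line drawn there is the axis of this reflection, which makes an angle of $\pi/8$ ($22.5^\circ$) with the $x$-axis.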
<a id="task3"></a>
<h3> Task 3 </h3>
Find the matrix representing the reflection over the line $y=x$.
<i>Hint: Think about the reflections of the points $ \myrvector{0 \\ 1} $, $ \myrvector{-1 \\ 0} $, and $ \myrvector{-\sqrttwo \\ \sqrttwo} $ over the line $y=x$.</i>
Randomly create a quantum state and multiply it with this matrix to find its reflection over the line $y = x$.
Draw both states.
Repeat the task for a few times.
<h3>Solution</h3>
The reflection over the line $y=x$ swaps the first and second amplitudes.
This is the operator NOT: $ X = \mymatrix{rr}{0 & 1 \\ 1 & 0} $.
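As a quick check with the points from the hint: $ \X \myvector{0 \\ 1} = \myvector{1 \\ 0} $, $ \X \myrvector{-1 \\ 0} = \myrvector{0 \\ -1} $, and $ \X \myrvector{-\sqrttwo \\ \sqrttwo} = \myrvector{\sqrttwo \\ -\sqrttwo} $; in general $ \X \myvector{x \\ y} = \myvector{y \\ x} $, which is the reflection of the point $(x,y)$ over the line $y=x$.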
A function for randomly creating a 2-dimensional quantum state:
```
# randomly create a 2-dimensional quantum state
from math import cos, sin, pi
from random import randrange
def random_qstate_by_angle():
angle_degree = randrange(360)
angle_radian = 2*pi*angle_degree/360
return [cos(angle_radian),sin(angle_radian)]
```
Reflecting the randomly picked quantum state over the line $y=x$.
```
%run quantum.py
draw_qubit()
# the line y=x
from matplotlib.pyplot import arrow
arrow(-1,-1,2,2,linestyle='dotted',color='red')
[x1,y1] = random_qstate_by_angle()
[x2,y2] = [y1,x1]
draw_quantum_state(x1,y1,"main")
draw_quantum_state(x2,y2,"ref")
show_plt()
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/lucianaribeiro/filmood/blob/master/SentimentDetectionRNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Installing Tensorflow
! pip install --upgrade tensorflow
# Installing Keras
! pip install --upgrade keras
# Install other packages
! pip install --upgrade pip nltk numpy
# Importing the libraries
from keras.datasets import imdb
from keras.preprocessing import sequence
from keras import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout
from numpy import array
# Disable tensor flow warnings for better view
from tensorflow.python.util import deprecation
deprecation._PRINT_DEPRECATION_WARNINGS = False
# Loading dataset from IMDB
vocabulary_size = 10000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words = vocabulary_size)
# Inspect a sample review and its label
print('Review')
print(X_train[6])
print('Label')
print(y_train[6])
# Review back to the original words
word2id = imdb.get_word_index()
id2word = {i: word for word, i in word2id.items()}
print('Review with words')
print([id2word.get(i, ' ') for i in X_train[6]])
print('Label')
print(y_train[6])
# Ensure that all sequences in a list have the same length
X_train = sequence.pad_sequences(X_train, maxlen=500)
X_test = sequence.pad_sequences(X_test, maxlen=500)
# Initialising the RNN
regressor=Sequential()
# Adding a first Embedding layer and some Dropout regularization
regressor.add(Embedding(vocabulary_size, 32, input_length=500))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularization
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularization
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularization
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(1, activation='sigmoid'))
# Compiling the RNN
regressor.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
X_valid, y_valid = X_train[:64], y_train[:64]
X_train2, y_train2 = X_train[64:], y_train[64:]
regressor.fit(X_train2, y_train2, validation_data=(X_valid, y_valid), batch_size=64, epochs=25)
! pip install --upgrade nltk
import nltk
nltk.download('punkt')
from nltk import word_tokenize
# A value close to 0 means the sentiment was negative and a value close to 1 means it's a positive review
word2id = imdb.get_word_index()
test=[]
for word in word_tokenize("this is simply one of the best films ever made"):
test.append(word2id[word])
test=sequence.pad_sequences([test],maxlen=500)
regressor.predict(test)
# A value close to 0 means the sentiment was negative and a value close to 1 means it's a positive review
word2id = imdb.get_word_index()
test=[]
for word in word_tokenize( "the script is a real insult to the intelligence of those watching"):
test.append(word2id[word])
test=sequence.pad_sequences([test],maxlen=500)
regressor.predict(test)
```
|
github_jupyter
|
```
!pip install torch torchtext
!git clone https://github.com/neubig/nn4nlp-code.git
from collections import defaultdict
import math
import time
import random
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
N=2 #length of window on each side (so N=2 gives a total window size of 5, as in t-2 t-1 t t+1 t+2)
EMB_SIZE = 128 # The size of the embedding
embeddings_location = "embeddings.txt" #the file to write the word embeddings to
labels_location = "labels.txt" #the file to write the labels to
# We reuse the data reading from the language modeling class
w2i = defaultdict(lambda: len(w2i))
S = w2i["<s>"]
UNK = w2i["<unk>"]
def read_dataset(filename):
with open(filename, "r") as f:
for line in f:
yield [w2i[x] for x in line.strip().split(" ")]
# Read in the data
train = list(read_dataset("nn4nlp-code/data/ptb/train.txt"))
w2i = defaultdict(lambda: UNK, w2i)
dev = list(read_dataset("nn4nlp-code/data/ptb/valid.txt"))
i2w = {v: k for k, v in w2i.items()}
nwords = len(w2i)
with open(labels_location, 'w') as labels_file:
for i in range(nwords):
labels_file.write(i2w[i] + '\n')
class CBOW(nn.Module):
def __init__(self, vocab_size, embed_dim):
super(CBOW, self).__init__()
self.embeddings_bag = nn.EmbeddingBag(vocab_size, embed_dim, mode='sum')
self.fcl = nn.Linear(embed_dim, vocab_size, bias=False)
def forward(self, x):
x = self.embeddings_bag(x.view(1, -1))
return self.fcl(x)
model = CBOW(nwords, EMB_SIZE)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# Calculate the loss value for the entire sentence
def calc_sent_loss(sent):
#add padding to the sentence equal to the size of the window
#as we need to predict the eos as well, the future window at that point is N past it
padded_sent = [S] * N + sent + [S] * N
# Step through the sentence
all_losses = []
for i in range(N,len(sent)+N):
model.zero_grad()
logits = model(torch.LongTensor(padded_sent[i-N:i] + padded_sent[i+1:i+N+1]))
loss = F.cross_entropy(logits, torch.tensor(padded_sent[i]).view(1))
loss.backward()
opt.step()
all_losses.append(loss.cpu().detach().numpy())
return sum(all_losses)
MAX_LEN = 100
for ITER in range(100):
print("started iter %r" % ITER)
# Perform training
random.shuffle(train)
train_words, train_loss = 0, 0.0
start = time.time()
for sent_id, sent in enumerate(train):
my_loss = calc_sent_loss(sent)
train_loss += my_loss
train_words += len(sent)
# my_loss.backward()
# trainer.update()
if (sent_id+1) % 5000 == 0:
print("--finished %r sentences" % (sent_id+1))
print("iter %r: train loss/word=%.4f, ppl=%.4f, time=%.2fs" % (ITER, train_loss/train_words, math.exp(train_loss/train_words), time.time()-start))
# Evaluate on dev set
dev_words, dev_loss = 0, 0.0
start = time.time()
for sent_id, sent in enumerate(dev):
my_loss = calc_sent_loss(sent)
dev_loss += my_loss
dev_words += len(sent)
# trainer.update()
print("iter %r: dev loss/word=%.4f, ppl=%.4f, time=%.2fs" % (ITER, dev_loss/dev_words, math.exp(dev_loss/dev_words), time.time()-start))
print("saving embedding files")
with open(embeddings_location, 'w') as embeddings_file:
# W_w_p was left over from the original DyNet version; read the learned weights from the PyTorch embedding layer instead
W_w_np = model.embeddings_bag.weight.data.cpu().numpy()
for i in range(nwords):
ith_embedding = '\t'.join(map(str, W_w_np[i]))
embeddings_file.write(ith_embedding + '\n')
```
|
github_jupyter
|
# Supplemental Information:
> **"Clonal heterogeneity influences the fate of new adaptive mutations"**
> Ignacio Vázquez-García, Francisco Salinas, Jing Li, Andrej Fischer, Benjamin Barré, Johan Hallin, Anders Bergström, Elisa Alonso-Pérez, Jonas Warringer, Ville Mustonen, Gianni Liti
## Figure 3 (+ Supp. Figs.)
This IPython notebook is provided for reproduction of Figures 3, S3, S4 and S9 of the paper. It can be viewed by copying its URL to nbviewer and it can be run by opening it in binder.
```
# Load external dependencies
from setup import *
# Load internal dependencies
import config,gmm,plot,utils
%load_ext autoreload
%autoreload 2
%matplotlib inline
ids = pd.read_csv(dir_data+'seq/sample_ids_merged_dup.csv')
ids.loc[ids.clone.isnull(),'type'] = 'population'
ids.loc[(ids.clone.notnull()) & (ids.time==0),'type'] = 'ancestral clone'
ids.loc[(ids.clone.notnull()) & (ids.time==32),'type'] = 'evolved clone'
for seq_type, seq_id in ids.groupby('type'):
print('{0} sequencing coverage\nBottom quartile: {1:.2f}x, Top quartile: {2:.2f}x, Min: {3:.2f}x, Max: {4:.2f}x, Median: {5:.2f}x\n'\
.format(seq_type.capitalize(),
seq_id['coverage'].quantile(.25), \
seq_id['coverage'].quantile(.75), \
seq_id['coverage'].min(), \
seq_id['coverage'].max(), \
seq_id['coverage'].median()))
```
## Data import
Top panels - Import subclonal frequency
```
# Load data
seq_st_df = pd.read_csv(dir_data+'seq/subclonality/seq_subclonality.csv', encoding='utf-8')
# Compute cumulative haplotype frequencies for major subclones
seq_st_df['clonal'] = seq_st_df.apply(
lambda x:
x[['subclone A','subclone B','subclone C','subclone D']].fillna(0).sum(),
axis=1
)
# Calculate the remaining bulk fraction
seq_st_df['bulk'] = 1.0 - seq_st_df['clonal']
seq_st_df.head()
```
Middle panels - Import mutation counts
```
# Load data
seq_dn_df = pd.read_csv(dir_data+'seq/de-novo/seq_de_novo_snv_indel.csv', encoding='utf-8', keep_default_na=False)
print(seq_dn_df.shape)
seq_dn_df.head()
```
The tally of SNVs and indels across whole-population genome sequences is:
```
seq_dn_df[(seq_dn_df.clone!='')].groupby(['selection','population','time','variant_type']).size()
seq_dn_df[(seq_dn_df.time==0) & (seq_dn_df.clone!='') & (seq_dn_df.ploidy=='haploid')].groupby(['selection','mutation_type','variant_type']).size()
seq_dn_df[(seq_dn_df.time==32) & (seq_dn_df.clone!='')].groupby(['selection','mutation_type','variant_type']).size()
```
Bottom panels - Import phenotype evolution
```
# Load data
pheno_df = pd.read_csv(dir_data+'pheno/populations/pheno_populations.csv.gz', encoding='utf-8', keep_default_na=False, na_values='NaN')
# Filter out strains used for spatial control
pheno_df = pheno_df[(pheno_df.group == 'ancestral')|\
(pheno_df.group == 'evolved')]
groups_ph = pheno_df.groupby(['group','cross','cross_rep','selection','selection_rep'])
pheno_df = pheno_df[pheno_df.selection_rep != '']
for (ii,((group,cross,cross_rep,selection,selection_rep),g1)) in enumerate(groups_ph):
if group=='evolved':
df = groups_ph.get_group(('ancestral',cross,cross_rep,selection,''))
df.loc[:,'selection_rep'] = df.selection_rep.replace([''],[selection_rep])
df.loc[:,'population'] = df['background']+'_'+df['cross']+'_'+df['cross_rep'].apply(str)+'_'+df['selection']+'_'+df['selection_rep'].apply(str)
pheno_df = pheno_df.append(df)
pheno_df = pheno_df.reset_index(drop=True)
# Set reference as mean phenotype of the ancestral hybrid
def normalize_phenotype(df, param_abs='norm_growth_rate', param_rel='rel_growth_rate'):
df[param_rel] = df[param_abs] - df[df.group=='ancestral'][param_abs].mean()
return df
pheno_df = pheno_df.groupby(['selection','environment','population'], as_index=False).apply(normalize_phenotype, param_abs='norm_growth_rate', param_rel='rel_growth_rate')
pheno_df = pheno_df.groupby(['selection','environment','population'], as_index=False).apply(normalize_phenotype, param_abs='norm_doubling_time', param_rel='rel_doubling_time')
# # Filter out measurement replicates with >5% measurement error
# pheno_df['pct'] = pheno_df.groupby(['selection','environment','population','group','isolate','gene','genotype_long'])['rel_growth_rate']\
# .apply(lambda x: (x-x.mean())/float(x.mean()))
# pheno_df = pheno_df[abs(pheno_df['pct'])<10]
pheno_df.head() # show dataframe header to stdout
```
## Figure 3 - Subclonal heterogeneity
```
param = 'rel_growth_rate'
panels = {
'HU': {
'WAxNA_F12_1_HU_2':0,
'WAxNA_F12_1_HU_3':1,
'WAxNA_F12_2_HU_3':2
},
'RM': {
'WAxNA_F12_1_RM_3':0,
'WAxNA_F12_1_RM_4':1,
'WAxNA_F12_2_RM_2':2
}
}
populations = panels['HU'].keys()+panels['RM'].keys()
groups_st = seq_st_df[seq_st_df.population.isin(populations)]
groups_dn = seq_dn_df[(seq_dn_df.population.isin(populations))& \
(seq_dn_df.clone=='')& \
(seq_dn_df.gene!='non-coding')]
groups_ph = pheno_df[pheno_df.population.isin(populations)& \
np.isfinite(pheno_df[param])] # Take rows where param is finite
groups_st = groups_st.groupby('selection')
groups_dn = groups_dn.groupby('selection')
groups_ph = groups_ph.groupby(['selection','environment'])
for (ii, environment) in enumerate(['HU','RM']):
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(6, 4), sharey='row')
fig.subplots_adjust(left=0.07,bottom=0.07,right=0.85,top=0.95,hspace=0.3,wspace=0.1)
# Set scales
for ax in axes[0]:
ax.set_xlim(0, 32)
ax.set_ylim(0, 1)
for ax in axes[1]:
if environment=='HU':
ax.set_xlim(-0.3, 0.5)
ax.set_ylim(0, 0.15)
elif environment=='RM':
ax.set_xlim(-0.5, 1.9)
ax.set_ylim(0, 0.12)
### Top panels ###
# De novo mutations #
for (jj, (population, gdn)) in enumerate(groups_dn.get_group(environment).groupby('population')):
# Retrieve axes
ax1 = axes[0][panels[environment][population]]
for (gene, cds_pos, sub, protein_pos, amino_acids, consequence), gdx in \
gdn.groupby(['gene','cds_position','substitution','protein_position','amino_acids','consequence_short']):
assignment = gdx.assignment.unique()[0]
mutation_type = gdx.mutation_type.unique()[0]
gdx.time = gdx.time.apply(int)
gdx = gdx.sort_values('time').reset_index(drop=True)
gdx = gdx.sort_index()
ax1.plot(
gdx.index.values, gdx.frequency.values,
color=config.lineages[assignment]['fill'],
**utils.merge_two_dicts(config.mutation_type[mutation_type],
config.consequence_short[consequence])
)
if mutation_type=='driver':
index = np.argmax(gdx.frequency)
ax1.annotate(gene, xy=(index,gdx.frequency[index]), style='italic', fontsize=6,
textcoords='offset points', xytext=(0, 13), ha = 'center', va = 'top',
path_effects=[path_effects.withStroke(linewidth=0.5, foreground="w")], zorder=3)
ax1.annotate(amino_acids.split('/')[0]+protein_pos+amino_acids.split('/')[1],
xy=(index,gdx.frequency[index]), fontsize=5,
textcoords='offset points', xytext=(0, 7), ha = 'center', va = 'top',
path_effects=[path_effects.withStroke(linewidth=0.4, foreground="w")], zorder=3)
# Subclonal frequency #
for (jj, (population,gst)) in enumerate(groups_st.get_group(environment).groupby('population')):
# Retrieve axes
ax2 = axes[0][panels[environment][population]]
# Set title
ax2.set_title(population.replace('_',' '), fontsize=7, weight='bold')
#
gst.set_index('time', inplace=True)
colors=[config.lineages[x]['fill'] for x in ['subclone A','subclone B','bulk']]
gst[['subclone A','subclone B','bulk']].plot(
ax=ax2, kind='bar', legend=False,
stacked=True, rot=0, width=0.75, position=0.5, color=colors
)
# Rotate the x-axis ticks
ax2.set_xlabel('', rotation=0)
### Bottom panels ###
for (jj, (population, gph)) in enumerate(groups_ph.get_group((environment,environment)).groupby('population')):
# Retrieve axes
ax3 = axes[1][panels[environment][population]]
utils.simple_axes(ax3)
for (kk, (time, gt)) in enumerate(gph.groupby('group')):
print(environment, population, time)
x, y = plot.histogram_binned_data(ax, gt[param], bins=34)
ax3.plot(x, y, color=config.population['color'][time], linewidth=0.75)
ax3.fill_between(x, 0, y, label=config.population['long_label'][time],
alpha=0.45, facecolor=config.population['color'][time])
# Mean of all isolates
gt_all = gt.groupby(['isolate','gene','genotype_long','assignment'])
gt_all = gt_all[param].agg(np.mean)#.mean()
# Mean of random isolates
gt_random = gt[(gt['assignment']=='')].groupby(['isolate','gene','genotype_long','assignment'])
gt_random = gt_random[param].agg(np.mean)#.mean()
# Mean of targeted isolates
gt_target = gt[(gt['assignment']!='')].groupby(['isolate','gene','genotype_long','assignment'])
gt_target = gt_target[param].agg(np.mean)#.mean()
# Gaussian mixture model
X = gt_random[:, np.newaxis]
N = np.arange(1, 4)
models = gmm.gmm_fit(X, N)
# Compute the AIC and the BIC
AIC = [m.aic(X) for m in models]
BIC = [m.bic(X) for m in models]
M_best = models[np.argmin(BIC)]
print(BIC)
# Mean of the distribution
for m, v in zip(abs(M_best.means_.ravel()), M_best.covariances_.ravel()):
print('Mean: %.6f, Variance: %.6f' % (m, v,))
ax3.plot([m,m], ax3.get_ylim(),
color=config.population['color'][time],
linestyle='--', dashes=(4,3), linewidth=1)
pos = ax3.get_ylim()[0] * 0.75 + ax3.get_ylim()[1] * 0.25
trans = ax3.get_xaxis_transform() # x in data units, y in axes fraction
ax3.annotate(
np.around(m, 2), xy=(m, 0.85), xycoords=trans, fontsize=6,
color='k', va='center', ha=('right' if time=='ancestral' else 'left'),
xytext=((-5 if time=='ancestral' else 5),0), textcoords='offset points',
path_effects=[path_effects.withStroke(linewidth=0.5, foreground="w")]
)
x_data = np.array(gt_all)
y_data = np.repeat([0.03*(ax3.get_ylim()[1]-ax3.get_ylim()[0])], len(x_data))
markerline, stemlines, baseline = ax3.stem(x_data, y_data)
plt.setp(markerline, 'markerfacecolor', config.population['color'][time], markersize = 0)
plt.setp(stemlines, linewidth=1, color=config.population['color'][time],
path_effects=[path_effects.withStroke(linewidth=0.75, foreground="w")])
plt.setp(baseline, 'color', 'none')
if len(gt_target)>0:
x_data = np.array(gt_target)
y_data = np.repeat([0.2*(ax3.get_ylim()[1]-ax3.get_ylim()[0])], len(x_data))
markerline, stemlines, baseline = ax3.stem(x_data, y_data)
plt.setp(markerline, 'color', config.population['color'][time],
markersize = 2.75, markeredgewidth=.75, markeredgecolor='k', zorder=3)
plt.setp(stemlines, linewidth=.75, color=config.population['color'][time],
path_effects=[path_effects.withStroke(linewidth=1.25, foreground='k')], zorder=2)
plt.setp(baseline, 'color', 'none', zorder=1)
for (isolate, gene, genotype, assignment), mean in gt_target.iteritems():
ax3.annotate(
gene, xy = (mean, 0.2), xycoords=('data','axes fraction'),
xytext = (0, 8), textcoords = 'offset points',
ha = 'center', va = 'top', fontsize = 6, style = 'italic',
path_effects=[path_effects.withStroke(linewidth=0.5, foreground="w")]
)
# Set axes labels
axes[0, 1].set_xlabel(r'Time, $t$ (days)')
axes[0, 0].set_ylabel('Cumulative subclone\n frequency, $f_j$ (bars)')
axes[0, 2].twinx().set_ylabel('Allele frequency (lines)', rotation=270, va='baseline')
axes[1, 1].set_xlabel(r'Rel. growth rate, $\lambda_{k}(t)$')
axes[1, 0].set_ylabel('Density')
# Set legends
leg1 = axes[0, 2].legend(bbox_to_anchor=(1.3, 0.75), frameon=False,
loc='center left', borderaxespad=0.,
handlelength=0.75, title='Lineage', prop={'size':6})
driver_artist = lines.Line2D((0,1),(0,0), color=config.lineages['bulk']['fill'],
**config.mutation_type['driver'])
passenger_artist = lines.Line2D((0,1),(0,0), color=config.lineages['bulk']['fill'],
**config.mutation_type['passenger'])
nonsyn_artist = lines.Line2D((0,1),(0,0), mfc=config.lineages['bulk']['fill'],
linestyle='', linewidth=1.5,
path_effects=[path_effects.withStroke(linewidth=2, foreground="k")],
**config.consequence_short['non-synonymous'])
syn_artist = lines.Line2D((0,1),(0,0), mfc=config.lineages['bulk']['fill'],
linestyle='', linewidth=1.5,
path_effects=[path_effects.withStroke(linewidth=2, foreground="k")],
**config.consequence_short['synonymous'])
leg2 = axes[0, 2].legend([driver_artist,passenger_artist,nonsyn_artist,syn_artist],
['driver','passenger','non-synonymous','synonymous'],
bbox_to_anchor=(1.3, 0.25), ncol=1,
frameon=False, loc='lower left',
borderaxespad=0, handlelength=1.75,
title='Mutation', prop={'size':6})
axes[0, 2].add_artist(leg1)
axes[0, 2].get_legend().get_title().set_fontsize('7')
leg3 = axes[1, 2].legend(bbox_to_anchor=(1.3, 0.5), frameon=False,
loc='center left', borderaxespad=0., framealpha=1,
handlelength=0.75, title='Time', prop={'size':6})
axes[1, 2].get_legend().get_title().set_fontsize('7')
for leg in [leg1,leg2]:
plt.setp(leg.get_title(), fontsize=7)
# Set panel labels
axes[0,0].text(-0.24, 1.1, chr(2*ii + ord('A')), transform=axes[0,0].transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
axes[0,1].text(0.5, 1.2, 'Selection: %s' % config.selection['long_label'][environment],
transform=axes[0,1].transAxes, fontsize=8, va='center', ha='center')
axes[1,0].text(-0.24, 1.1, chr(2*ii + ord('B')), transform=axes[1,0].transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
# Axes limits
for ax in fig.get_axes():
ax.xaxis.label.set_size(6)
ax.yaxis.label.set_size(6)
ax.tick_params(axis='both', which='major', size=2, labelsize=6)
ax.tick_params(axis='both', which='minor', size=0, labelsize=0)
plt.setp(ax.get_xticklabels(), fontsize=6)
plt.setp(ax.get_yticklabels(), fontsize=6)
for loc in ['top','bottom','left','right']:
ax.spines[loc].set_linewidth(0.75)
if ax.is_last_row():
if environment=='HU':
ax.xaxis.set_major_locator(ticker.MaxNLocator(nbins=5))
ax.yaxis.set_major_locator(ticker.MaxNLocator(nbins=5))
elif environment=='RM':
ax.xaxis.set_major_locator(ticker.MaxNLocator(nbins=5))
ax.yaxis.set_major_locator(ticker.MaxNLocator(nbins=4))
plot.save_figure(dir_paper+'figures/figure3/figure3_%s' % environment)
plt.show()
```
**Fig. 3:** Reconstruction of subclonal dynamics. (**A** and **C**) Competing subclones evolved in (*A*) hydroxyurea and (*C*) rapamycin experienced a variety of fates. Time is on the $x$-axis, starting after crossing when the population has no competing subclones. Cumulative haplotype frequency of subclones (bars) and allele frequency of *de novo* mutants (lines) are on the $y$-axis. Most commonly, selective sweeps were observed where a spontaneous mutation arose and increased in frequency. Driver mutations are solid lines and passenger mutations are dashed lines, colored by subclone assignment; circles and squares denote non-synonymous and synonymous mutations, respectively. (**B** and **D**) Variability in intra-population growth rate, estimated by random sampling of 96 individuals at initial ($t = 0$ days, blue) and final ($t = 32$ days, red) time points ($n = 32$ technical replicates per individual). Mean growth rates by individual are shown at the foot of the histogram (Fig. S7). The posterior means of the distribution modes fitted by a Gaussian mixture model are indicated as dashed lines. The fitter individuals (pins) carry driver mutations, measured by targeted sampling and sequencing.
## Figure S3 - Sequence evolution of WAxNA founders
```
panels = {
'HU': {
'WAxNA_F12_1_HU_1':(0,1),
'WAxNA_F12_1_HU_2':(0,2),
'WAxNA_F12_1_HU_3':(0,3),
'WAxNA_F12_2_HU_1':(1,1),
'WAxNA_F12_2_HU_2':(1,2),
'WAxNA_F12_2_HU_3':(1,3)
},
'RM': {
'WAxNA_F2_1_RM_1':(0,0),
'WAxNA_F12_1_RM_1':(0,1),
'WAxNA_F12_1_RM_2':(0,2),
'WAxNA_F12_1_RM_3':(0,3),
'WAxNA_F12_1_RM_4':(0,4),
'WAxNA_F2_1_RM_2':(1,0),
'WAxNA_F12_2_RM_1':(1,1),
'WAxNA_F12_2_RM_2':(1,2),
'WAxNA_F12_2_RM_3':(1,3),
'WAxNA_F12_2_RM_4':(1,4)
}
}
populations = panels['HU'].keys()+panels['RM'].keys()
groups_st = seq_st_df[seq_st_df.population.isin(populations)].groupby(['selection','population'])
groups_dn = seq_dn_df[(seq_dn_df.population.isin(populations))&\
(seq_dn_df.clone=='')&\
(seq_dn_df.gene!='non-coding')].groupby(['selection','population'])
# Create a figure with subplots
fig = plt.figure(figsize=(10, 10))
grid = gridspec.GridSpec(2, 1)
gs = {}
for (ii, e) in enumerate(['HU','RM']):
nrows = 2
ncols = 5
gs[e] = gridspec.GridSpecFromSubplotSpec(nrows, ncols,
subplot_spec=grid[ii],
hspace=0.3, wspace=0.15)
for (jj, p) in enumerate(panels[e]):
# Retrieve axes
ax1 = plt.subplot(gs[e][panels[e][p]])
ax2 = ax1.twinx()
### Subclone frequency ###
gst = groups_st.get_group((e,p))
# Set title
ax1.set_title(p.replace('_',' '), fontsize=7, weight='bold')
# Bar plot
gst = gst.set_index('time')
gst = gst[['subclone A','subclone B','subclone C','subclone D','bulk']]
gst.plot(ax=ax1, kind='bar',
legend=False, stacked=True, width=0.75, position=0.5,
color=[config.lineages[c]['fill'] for c in gst.columns])
### De novo mutations ###
if (e,p) in groups_dn.groups.keys():
gdn = groups_dn.get_group((e,p))
for (gene, pos, cds, sub, protein_pos, amino_acids, consequence), gdx \
in gdn.groupby(['gene','pos','cds_position','substitution',\
'protein_position','amino_acids','consequence_short']):
assignment = gdx.assignment.unique()[0]
mutation_type = gdx.mutation_type.unique()[0]
gdx = gdx.sort_values('time').reset_index(drop=True)
gdx = gdx.sort_index()
ax2.plot(gdx.index.values, gdx.frequency.values,
color=config.lineages[assignment]['line'],
**utils.merge_two_dicts(config.mutation_type[mutation_type],
config.consequence_short[consequence]))
if mutation_type=='driver':
index = np.argmax(gdx.frequency)
ax2.annotate(
gene, xy=(index,gdx.frequency[index]), style='italic', fontsize=6,
textcoords='offset points', xytext=(0, 13), ha = 'center', va = 'top',
path_effects=[path_effects.withStroke(linewidth=0.5, foreground="w")], zorder=3
)
ax2.annotate(
amino_acids.split('/')[0]+protein_pos+amino_acids.split('/')[1],
xy=(index,gdx.frequency[index]), fontsize=5,
textcoords='offset points', xytext=(0, 7), ha = 'center', va = 'top',
path_effects=[path_effects.withStroke(linewidth=0.4, foreground="w")], zorder=3
)
# Set legends
if (e,p) in [('HU','WAxNA_F12_1_HU_3'),('RM','WAxNA_F12_1_RM_4')]:
leg1 = ax1.legend(bbox_to_anchor=(1.3, -0.125), ncol=1,
frameon=False, loc='lower left',
borderaxespad=0., handlelength=0.7,
title='Lineage', prop={'size':6})
if (e,p) in [('HU','WAxNA_F12_2_HU_3'),('RM','WAxNA_F12_2_RM_4')]:
driver_artist = lines.Line2D((0,1),(0,0), color=config.lineages['bulk']['fill'],
**config.mutation_type['driver'])
passenger_artist = lines.Line2D((0,1),(0,0), color=config.lineages['bulk']['fill'],
**config.mutation_type['passenger'])
nonsyn_artist = lines.Line2D((0,1),(0,0), mfc=config.lineages['bulk']['fill'], linestyle='',
path_effects=[path_effects.withStroke(linewidth=2, foreground="k")],
**config.consequence_short['non-synonymous'])
syn_artist = lines.Line2D((0,1),(0,0), mfc=config.lineages['bulk']['fill'], linestyle='',
path_effects=[path_effects.withStroke(linewidth=2, foreground="k")],
**config.consequence_short['synonymous'])
leg2 = ax1.legend([driver_artist,passenger_artist,nonsyn_artist,syn_artist],
['driver','passenger','non-synonymous','synonymous'],
bbox_to_anchor=(1.3, 1.125), ncol=1,
frameon=False, loc='upper left',
borderaxespad=0, handlelength=1.75,
title='Mutation', prop={'size':6})
for leg in [leg1,leg2]:
plt.setp(leg.get_title(),fontsize=6)
# Set axes labels
if (e,p) in [('HU','WAxNA_F12_2_HU_2'),('RM','WAxNA_F12_2_RM_2')]:
ax1.set_xlabel(r'Time, $t$ (days)')
else:
ax1.set_xlabel('')
if (e,p) in [('HU','WAxNA_F12_1_HU_1'),('RM','WAxNA_F2_1_RM_1'),
('HU','WAxNA_F12_2_HU_1'),('RM','WAxNA_F2_1_RM_2')]:
ax1.set_ylabel('Cumulative subclone\n frequency, $f_j$ (bars)')
else:
ax1.set_yticklabels([])
if (e,p) in [('HU','WAxNA_F12_1_HU_3'),('RM','WAxNA_F12_1_RM_4'),
('HU','WAxNA_F12_2_HU_3'),('RM','WAxNA_F12_2_RM_4')]:
ax2.set_ylabel('Allele frequency (lines)', rotation=270, va='baseline')
else:
ax2.set_yticklabels([])
plt.setp(ax1.xaxis.get_majorticklabels(), rotation=0) # rotate the x-axis ticks
# Set panel labels
if (e,p) in [('HU','WAxNA_F12_1_HU_1'),('RM','WAxNA_F2_1_RM_1')]:
ax1.text(-0.25, 1.2, chr(ii + ord('A')), transform=ax1.transAxes,
fontsize=9, fontweight='bold', va='center', ha='right')
if (e,p) in [('HU','WAxNA_F12_1_HU_2'),('RM','WAxNA_F12_1_RM_2')]:
ax1.text(0.5, 1.2, 'Selection: %s' % config.selection['long_label'][e],
transform=ax1.transAxes, fontsize=8, va='center', ha='center')
for ax in fig.get_axes():
ax.set_ylim(0, 1) # axes limits
ax.xaxis.label.set_size(6)
ax.yaxis.label.set_size(6)
ax.tick_params(axis='both', which='major', size=2, labelsize=6)
ax.tick_params(axis='both', which='minor', size=0, labelsize=0)
plt.setp(ax.get_xticklabels(), fontsize=6)
plt.setp(ax.get_yticklabels(), fontsize=6)
for tick in ax.get_xticklabels():
tick.set_visible(True)
for loc in ['top','bottom','left','right']:
ax.spines[loc].set_linewidth(.75)
plot.save_figure(dir_supp+'figures/supp_figure_seq_subclonal_dynamics/supp_figure_seq_subclonal_dynamics_cross')
plt.show()
```
**Fig. S3:** Subclonal dynamics in time for WAxNA founders evolved in (**A**) hydroxyurea and (**B**) rapamycin, measured by whole-population sequencing. Time is on the $x$-axis, starting after crossing when the population has no competing subclones. Cumulative haplotype frequency of subclones (bars) and allele frequency of *de novo* mutants (lines) are on the $y$-axis. Driver mutations are solid lines and passenger mutations are dashed lines, colored by subclone assignment; circles and squares denote non-synonymous and synonymous mutations, respectively. No macroscopic subclones or *de novo* mutations were detected in any of the control replicates in YPD.
## Figure S4 - Sequence evolution of WA, NA founders
```
panels = {
'HU': {
'WA_HU_1':(0,0),
'WA_HU_2':(0,1),
'NA_HU_1':(0,2),
'NA_HU_2':(0,3),
},
'RM': {
'WA_RM_1':(0,0),
'WA_RM_2':(0,1),
'NA_RM_1':(0,2),
'NA_RM_2':(0,3),
}
}
populations = panels['HU'].keys()+panels['RM'].keys()
groups_dn = seq_dn_df[(seq_dn_df.population.isin(populations)) & \
(seq_dn_df.clone=='') & \
(seq_dn_df.gene!='non-coding')].groupby(['selection','population'])
# Get a figure with a lot of subplots
fig = plt.figure(figsize=(8, 5))
grid = gridspec.GridSpec(2, 1, hspace=0.5)
gs = {}
for (ii, e) in enumerate(['HU','RM']):
nrows = 1
ncols = 4
gs[e] = gridspec.GridSpecFromSubplotSpec(nrows, ncols,
subplot_spec=grid[ii],
wspace=0.15)
### De novo mutations ###
for (jj, p) in enumerate(panels[e].keys()):
# Retrieve axes
ax = plt.subplot(gs[e][panels[e][p]])
# Set title
ax.set_title(p.replace('_',' '), fontsize=7, weight='bold')
# Set axes labels
if (e,p) in [('HU','WA_HU_1'),('RM','WA_RM_1')]:
ax.set_ylabel('Allele frequency')
ax.text(-0.15, 1.2, chr(ii + ord('A')), transform=ax.transAxes,
fontsize=9, fontweight='bold', va='center', ha='right')
ax.text(0., 1.2, 'Selection: %s' % config.selection['long_label'][e],
transform=ax.transAxes, fontsize=8, va='center', ha='left')
ax.set_yticklabels([0.0,0.2,0.4,0.6,0.8,1.0])
else:
ax.set_yticklabels([])
ax.set_xlabel(r'Time, $t$ (days)')
# Set legend
if (e,p) in [('HU','NA_HU_2')]:
driver_artist = lines.Line2D((0,1),(0,0), color=config.lineages['bulk']['fill'],
**config.mutation_type['driver'])
passenger_artist = lines.Line2D((0,1),(0,0), color=config.lineages['bulk']['fill'],
**config.mutation_type['passenger'])
nonsyn_artist = lines.Line2D((0,1),(0,0), mfc=config.lineages['bulk']['fill'], linestyle='',
path_effects=[path_effects.withStroke(linewidth=2, foreground="k")],
**config.consequence_short['non-synonymous'])
syn_artist = lines.Line2D((0,1),(0,0), mfc=config.lineages['bulk']['fill'], linestyle='',
path_effects=[path_effects.withStroke(linewidth=2, foreground="k")],
**config.consequence_short['synonymous'])
leg1 = ax.legend([driver_artist,passenger_artist,nonsyn_artist,syn_artist],
['driver','passenger','non-synonymous','synonymous'],
bbox_to_anchor=(1.1, -0.25), ncol=1,
frameon=False, loc='center left',
borderaxespad=0, handlelength=1.75,
title='Mutation', prop={'size':6})
plt.setp(leg1.get_title(),fontsize=6)
# Set empty panels
if (e,p) in groups_dn.groups.keys():
gdn = groups_dn.get_group((e,p))
else:
ax.axvspan(8, 32, facecolor='w', edgecolor='0.5', alpha=0.5, hatch='//')
ax.annotate('Extinct', xy=(16,0.5), fontsize=6, ha='center',
path_effects=[path_effects.withStroke(linewidth=0.5, foreground="w")])
continue
for (gene, cds_pos, sub, protein_pos, amino_acids, consequence), gdx in \
gdn.groupby(['gene','cds_position','substitution','protein_position','amino_acids','consequence_short']):
assignment = gdx.assignment.unique()[0]
mutation_type = gdx.mutation_type.unique()[0]
gdx.time = gdx.time.apply(int)
gdx = gdx.sort_values('time').reset_index(drop=True)
gdx = gdx.sort_index()
gdx = gdx.set_index('time')
ax.plot(gdx.index, gdx.frequency,
color=config.lineages['bulk']['line'],
**utils.merge_two_dicts(config.mutation_type[mutation_type],
config.consequence_short[consequence]))
if mutation_type=='driver':
index = np.argmax(gdx.frequency)
ax.annotate(
gene, xy=(index,gdx.frequency[index]), style='italic', fontsize=6,
textcoords='offset points', xytext=(0, 13), ha = 'center', va = 'top',
path_effects=[path_effects.withStroke(linewidth=0.5, foreground="w")], zorder=3
)
ax.annotate(
amino_acids.split('/')[0]+protein_pos+amino_acids.split('/')[1],
xy=(index,gdx.frequency[index]), fontsize=5,
textcoords='offset points', xytext=(0, 7), ha = 'center', va = 'top',
path_effects=[path_effects.withStroke(linewidth=0.4, foreground="w")], zorder=3
)
for ax in fig.get_axes():
ax.set_xlim(2, 32) # axes limits
ax.set_ylim(0, 1)
ax.xaxis.label.set_size(6)
ax.yaxis.label.set_size(6)
ax.tick_params(axis='both', which='major', size=2, labelsize=6)
ax.tick_params(axis='both', which='minor', size=0, labelsize=0)
plt.setp(ax.get_xticklabels(), fontsize=6)
plt.setp(ax.get_yticklabels(), fontsize=6)
ax.set_xscale('log', base=2)
ax.set_xticks([2, 4, 8, 16, 32])
ax.xaxis.set_major_formatter(ticker.ScalarFormatter())
ax.yaxis.set_major_locator(ticker.MaxNLocator(nbins=5))
for loc in ['top','bottom','left','right']:
ax.spines[loc].set_linewidth(0.75)
plot.save_figure(dir_supp+'figures/supp_figure_seq_subclonal_dynamics/supp_figure_seq_subclonal_dynamics_parents')
plt.show()
```
**Fig. S4:** Subclonal dynamics in time for WA and NA founders evolved in (**A**) hydroxyurea and (**B**) rapamycin, measured by whole-population sequencing. WA founders evolved in hydroxyurea did not survive after 4 days. Driver mutations are solid lines and passenger mutations are dashed lines, colored by subclone assignment; circles and squares denote non-synonymous and synonymous mutations, respectively. No *de novo* mutations were detected in any of the control replicates in YPD.
## Figure S9 - Phenotype evolution
We are inferring the model's components ($F, \lambda_1, \sigma_{\lambda_1}, \lambda_2, \sigma_{\lambda_2}$) using a Gaussian mixture model.
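The project helper `gmm.gmm_fit` used for Figure 3 above wraps this fit. As a rough stand-alone sketch (assuming only a 1-D array of per-isolate relative growth rates; the function and variable names below are illustrative, not the project's actual helpers), the same procedure can be run with scikit-learn's `GaussianMixture`, selecting the number of modes by BIC and reading off the mixture fractions $F$, mode means $\lambda_i$ and standard deviations $\sigma_{\lambda_i}$:
```
# Minimal stand-alone sketch (not the project's gmm helper): fit 1-D Gaussian
# mixtures with 1..max_components modes and keep the best model by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_growth_rate_mixture(rates, max_components=3):
    X = np.asarray(rates).reshape(-1, 1)
    models = [GaussianMixture(n_components=n, covariance_type='full').fit(X)
              for n in range(1, max_components + 1)]
    best = min(models, key=lambda m: m.bic(X))
    F = best.weights_                            # mixture fractions F
    lam = best.means_.ravel()                    # mode means (lambda_i)
    sigma = np.sqrt(best.covariances_.ravel())   # mode standard deviations (sigma_lambda_i)
    return F, lam, sigma

# Illustrative usage on simulated growth rates from two subpopulations
rates = np.concatenate([np.random.normal(0.00, 0.02, 60),
                        np.random.normal(0.15, 0.03, 36)])
F, lam, sigma = fit_growth_rate_mixture(rates)
print(F, lam, sigma)
```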
```
param='rel_growth_rate'
scatter_panels = {
'WAxNA_F12_1_HU_2':0,
'WAxNA_F12_1_HU_3':1,
'WAxNA_F12_2_HU_3':2,
'WAxNA_F12_1_RM_3':3,
'WAxNA_F12_1_RM_4':4,
'WAxNA_F12_2_RM_2':5,
}
data = pheno_df[pheno_df.population.isin(scatter_panels.keys())& \
np.isfinite(pheno_df[param])] # Take rows where param is finite
data = pd.pivot_table(
data,
index=['selection','population','group','isolate','gene','genotype_long','assignment'],
columns='environment',
values=param,
aggfunc=np.mean
)
corr = pheno_df[pheno_df.population.isin(scatter_panels.keys())& \
np.isfinite(pheno_df[param])] # Take rows where param is finite
corr = pd.pivot_table(
corr,
index=['isolate','gene','genotype_long','assignment'],
columns=['selection','population','group','environment'],
values=param,
aggfunc=np.mean
)
corr = corr.groupby(level=['selection','population','group'], axis=1, group_keys=False)
corr = corr.apply(lambda x: x.corr(method='spearman'))
corr = corr.query('environment==\'SC\'')
corr = pd.melt(corr).dropna()
corr = corr.pivot_table(columns=['group'], index=['selection','population','environment'], values='value')
fig = plt.figure(figsize=(7.5,5.25))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.02, top=0.98)
# Make outer gridspec
grid = gridspec.GridSpec(nrows=2, ncols=3, width_ratios=[2, 2, 2], hspace=.5, wspace=.25)
gs = {}
for ii, ((s, p), gp) in enumerate(data.groupby(level=['selection','population'])):
print(s, p)
# Use gridspec to assign different formats to panels in one plot
gs[(s,p)] = gridspec.GridSpecFromSubplotSpec(nrows=2, ncols=2, hspace=.05, wspace=.05,
width_ratios=[4,1], height_ratios=[1,4],
subplot_spec=grid[scatter_panels[p]])
ax = plt.subplot(gs[(s,p)][:])
ax_scatter = plt.subplot(gs[(s,p)][1,0])
ax_x = plt.subplot(gs[(s,p)][0,0])
ax_y = plt.subplot(gs[(s,p)][1,1])
# Define plot ranges at beginning, since used often later
x = gp['SC'].values
y = gp[s].values
if s=='HU':
x_range = [-0.2, 0.45]
y_range = [-0.175, 0.225]
x_count_range = [0, 0.4]
y_count_range = [0, 0.3]
elif s=='RM':
x_range = [-0.4, 1.6]
y_range = [-0.2, 0.19]
x_count_range = [0, 0.4]
y_count_range = [0, 0.2]
# Set title
ax_x.set_title(p.replace('_',' '), fontsize=7, weight='bold')
ax_scatter.annotate(
'Ancestral (t = 0d)\n' r'$\rho$ = {:.2f}'.format(corr.ix[s, p, s]['ancestral']),
xy=(1.25, 1.15), xycoords='axes fraction', fontsize=6,
color=config.population['color']['ancestral'], ha='center', va='bottom'
)
ax_scatter.annotate(
'Evolved (t = 32d)\n' r'$\rho$ = {:.2f}'.format(corr.ix[s, p, s]['evolved']),
xy=(1.25, 1.025), xycoords='axes fraction', fontsize=6,
color=config.population['color']['evolved'], ha='center', va='bottom'
)
ax_scatter.axvline(x=0, ls='--', lw=1.5, color='lightgray', zorder=0)
ax_scatter.axhline(y=0, ls='--', lw=1.5, color='lightgray', zorder=0)
for jj, (t, gt) in enumerate(gp.groupby(level='group')):
gt_all = gt.groupby(level=['isolate','gene','genotype_long','assignment']).agg([np.mean])
gt_random = gt.query('assignment==\'\'').groupby(level=['isolate','gene','genotype_long','assignment']).agg([np.mean])
gt_target = gt.query('assignment!=\'\'').groupby(level=['isolate','gene','genotype_long','assignment']).agg([np.mean])
print(gt_target)
x_a = gt_all[s]
y_a = gt_all['SC']
x_r = gt_random[s]
y_r = gt_random['SC']
color = config.population['color'][t]
# Scatter plot
plot.scatter_plot(x_r, y_r, ax=ax_scatter, marker='.', color=color, ms=3)
ax_scatter.set_xlim(x_range)
ax_scatter.set_ylim(y_range)
# ax_scatter.annotate(corr.ix[s, p, 'SC'][t],
# xy=(0.95, 0.05), xycoords='axes fraction', fontsize=8,
# color=color, ha='right', va='bottom')
for (isolate, gene, genotype, assignment), data in gt_target.iterrows():
x_t = gt_target[s]
y_t = gt_target['SC']
plot.scatter_plot(x_t, y_t, ax=ax_scatter, marker='o', ms=3, mec='k', mfc=color)
ax_scatter.annotate(
gene, xy = (data[s], data['SC']), xycoords='data',
xytext = (0, 8), textcoords = 'offset points',
ha = 'center', va = 'top',
fontsize = 6, style = 'italic',
path_effects=[path_effects.withStroke(linewidth=0.5, foreground="w")]
)
# x-axis
plot.histogram_x(x_r, ax=ax_x, time=t)
ax_x.set_xlim(x_range)
ax_x.set_ylim(y_count_range)
# Mean of sequenced isolates
# lollipops(x_s, ax_x)
# y-axis
plot.histogram_y(y_r, ax=ax_y, time=t)
ax_y.set_xlim(x_count_range)
ax_y.set_ylim(y_range)
# Set axes labels
ax = plt.subplot(gs[('HU','WAxNA_F12_1_HU_3')][1,0])
ax.set_xlabel('%s\nRel. growth rate, $\lambda_k(t)$' % config.environment['long_label']['HU'])
ax = plt.subplot(gs[('HU','WAxNA_F12_1_HU_2')][1,0])
ax.set_ylabel('Rel. growth rate, $\lambda_k(t)$\n%s' % config.environment['long_label']['SC'])
ax = plt.subplot(gs[('RM','WAxNA_F12_1_RM_4')][1,0])
ax.set_xlabel('%s\nRel. growth rate, $\lambda_k(t)$' % config.environment['long_label']['RM'])
ax = plt.subplot(gs[('RM','WAxNA_F12_1_RM_3')][1,0])
ax.set_ylabel('Rel. growth rate, $\lambda_k(t)$\n%s' % config.environment['long_label']['SC'])
# Set panel labels
ax = plt.subplot(gs[('HU','WAxNA_F12_1_HU_2')][0,0])
ax.text(-.2, 1.75, chr(ord('A')), transform=ax.transAxes,
fontsize=9, fontweight='bold', va='center', ha='right')
ax = plt.subplot(gs[('HU','WAxNA_F12_1_HU_3')][0,0])
ax.text(0.5, 1.75, 'Selection: %s' % config.selection['long_label']['HU'], transform=ax.transAxes,
fontsize=8, va='center', ha='center')
ax = plt.subplot(gs[('RM','WAxNA_F12_1_RM_3')][0,0])
ax.text(-.2, 1.75, chr(ord('B')), transform=ax.transAxes,
fontsize=9, fontweight='bold', va='center', ha='right')
ax = plt.subplot(gs[('RM','WAxNA_F12_1_RM_4')][0,0])
ax.text(0.5, 1.75, 'Selection: %s' % config.selection['long_label']['RM'], transform=ax.transAxes,
fontsize=8, va='center', ha='center')
# Axes limits
for ax in fig.get_axes():
ax.xaxis.label.set_size(6)
ax.yaxis.label.set_size(6)
ax.tick_params(axis='both', which='major', size=2, labelsize=6)
ax.tick_params(axis='both', which='minor', size=0, labelsize=6)
for loc in ['top','bottom','left','right']:
ax.spines[loc].set_linewidth(0.75)
plot.save_figure(dir_supp+'figures/supp_figure_pheno_evolution/supp_figure_pheno_evolution')
plt.show()
```
**Fig. S9:** Variability in intra-population growth rate and fitness correlations. Fitness correlations of ancestral and evolved populations across environments, estimated by random sampling of individuals at initial (0 days, green) and final time points (32 days, purple), before and after selection in (**A**) hydroxyurea and (**B**) rapamycin. The relative growth rate $\lambda_k(t)$ per individual $k$ is shown, calculated by averaging over ${n_r\,{=}\,32}$ technical replicates per individual. Relative growth rates are normalized with respect to the mean population growth rate $\langle\lambda_k\rangle_{t=0}$ at $t=0$ days (see Figures 3B and 3D). The relative growth rates $\lambda_k(t)$ in the stress environment ($x$-axis) are compared to the control environment ($y$-axis). Using a Gaussian mixture model, we found the posterior probability of the mixture modes of the best-fit mixture (solid lines). The posterior means of the distribution modes are indicated as dashed lines. The fitter individuals carry driver mutations, as determined by targeted sampling and sequencing. Spearman's rank correlation, $\rho$, is shown on the top right of each panel, to assess the association between the growth rate of isolates in the stress and control environments at 0 and 32 days.
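As a rough sketch of the statistics described above (illustrative only, not the authors' actual pipeline), the two quantities can be computed with standard tools: a two-component Gaussian mixture via scikit-learn and Spearman's rank correlation via SciPy. The arrays `rate_stress` and `rate_control` below are hypothetical stand-ins for per-isolate growth rates.
```
# Illustrative sketch (not the authors' exact pipeline): fit a two-component
# Gaussian mixture to growth rates and compute Spearman's rank correlation.
# rate_stress and rate_control are hypothetical per-isolate arrays.
import numpy as np
from scipy.stats import spearmanr
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
rate_stress = np.concatenate([rng.normal(0.0, 0.05, 200), rng.normal(0.3, 0.05, 50)])
rate_control = rate_stress * 0.2 + rng.normal(0.0, 0.05, rate_stress.size)

# Two-component mixture over the stress-environment growth rates
gmm = GaussianMixture(n_components=2, random_state=0).fit(rate_stress.reshape(-1, 1))
posteriors = gmm.predict_proba(rate_stress.reshape(-1, 1))  # posterior probability per mode
print('mixture means: {}'.format(gmm.means_.ravel()))

# Association between stress and control growth rates
rho, pval = spearmanr(rate_stress, rate_control)
print('Spearman rho = {:.2f} (p = {:.2g})'.format(rho, pval))
```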
|
github_jupyter
|
# The Constellation Wizard requires an STK Scenario to be open
Simply run the cell below and the Constellation Wizard will appear.
```
from tkinter import Tk
from tkinter.ttk import *
from tkinter import W
from tkinter import E
from tkinter import scrolledtext
from tkinter import INSERT
from tkinter import END
from tkinter import IntVar
from tkinter import messagebox
from DeckAccessReaderGUI import *
import numpy as np
import pandas as pd
import os
from os import listdir
from os.path import isfile, join
from shutil import copyfile
from comtypes.client import CreateObject
from comtypes.client import GetActiveObject
from comtypes.gen import STKObjects
# Define window layout
window = Tk()
window.title('Constellation Wizard')
window.geometry('587x510')
cwd = os.getcwd()
cwdFiles = cwd+'\\Files'
window.iconbitmap(cwdFiles+'\\Misc\\'+'ConstellationWizardIcon.ico')
# # Configure Style
Style().theme_use('vista')
# # fonts for all widgets
# window.option_add("*Font", "calabri 9")
######################################### Col0 ########################################################
width = 35
padx = 3
pady = 1
column=0
row = 1
# Connect to STK
try:
root = ConnectToSTK(version=12)
startTime = root.CurrentScenario.QueryInterface(STKObjects.IAgScenario).StartTime
stopTime = root.CurrentScenario.QueryInterface(STKObjects.IAgScenario).StartTime+3600
except:
res = messagebox.askyesno('Constellation Wizard','Failed to connect to a scenario.\nIs a scenario in STK open?')
if res == True:
try:
root = ConnectToSTK(version=12)
startTime = root.CurrentScenario.QueryInterface(STKObjects.IAgScenario).StartTime
stopTime = root.CurrentScenario.QueryInterface(STKObjects.IAgScenario).StartTime+3600
except:
window.quit()
window.destroy()
else:
window.quit()
window.destroy()
def createConMsgBox():
res = messagebox.askyesno('Constellation Wizard',txt.get().replace(' ','-')+'.tce will be created and overwrite any existing file.\nThis may take a while if there are many satellites in the scenario.\nContinue?')
if res == True:
CreateConstellation(root,txt,txtBox,ssc=00000)
btnCreateCon = Button(window,width=width,text='Create Constellation From STK',command=lambda: createConMsgBox())
btnCreateCon.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2)
row += 1
# Load MTO
btnLoadMTO = Button(window,width=width,text='Load Constellation as MTO',command=lambda: LoadMTO(root,txtBox,MTOName = comboCon.get(),timestep=60,color=comboColor.get().lower(),orbitsOnOrOff=onOffStr(),orbitFrame=frameValue()))
btnLoadMTO.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2)
row += 1
# Orbit options
lblFrame = Label(window,text = 'Show Orbits:')
lblFrame.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
# Checkbox
def onOffStr():
onOff = showOrbits.get()
if onOff == 0:
onOff = 'off'
elif onOff == 1:
onOff = 'on'
return onOff
showOrbits = IntVar()
showOrbits.set(0)
checkButton = Checkbutton(window, variable=showOrbits,offvalue=0,onvalue=1)
checkButton.grid(column=column+1,row=row,padx=padx,pady=pady,sticky=W)
row += 1
row += 1
# Run Deck Access
btnDeckAccess = Button(window,width=width,text='Run Deck Access',command=lambda: runDeckAccess(root,txtStart.get(),txtStop.get(),comboCon,comboDA,txtBox,constraintSatName = comboSat.get()))
btnDeckAccess.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2)
row += 1
# Save Deck Access
def saveDA():
newName = txt.get().replace(' ','-')
res = messagebox.askyesno('Constellation Wizard',newName+'.tce will be created and overwrite any existing file.\nContinue?')
if res == True:
copyfile(cwdFiles+'\\Constellations\\deckAccessTLE.tce', cwdFiles+'\\Constellations\\'+newName+'.tce')
txtBox.insert(END,'Created: '+txt.get().replace(' ','-')+'.tce\n')
btnSave = Button(window,text='Save Deck Access',command=saveDA)
btnSave.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 1,sticky=W+E)
row += 2
# # Load Subset
btnLoadSubset = Button(window,width=width,text='Load Satellites Using Template',command= lambda: LoadSatsFromFileUsingTemplate(root,txtStart.get(),txtStop.get(),comboCon,selected,txtBox,comboSat.get(),color=comboColor.get().lower()))
btnLoadSubset.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2)
row += 2
# Add selected object to chain
def AddToChain():
addObj = comboChainCov.get()
chainName = comboChain.get()
try:
chain = root.GetObjectFromPath('*/Chain/'+chainName)
chain2 = chain.QueryInterface(STKObjects.IAgChain)
chain2.Objects.Add(addObj)
txtBox.insert(END,'Added: '+addObj.split('/')[-1]+'\n')
except:
txtBox.insert(END,'Failed to Add: '+addObj.split('/')[-1]+'\n')
btnAddChain = Button(window,width=width,text='Add To Chain',command=AddToChain)
btnAddChain.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2)
row += 1
# Compute chain accesses
def computeChain():
chainName = comboChain.get()
if root.CurrentScenario.Children.Contains(STKObjects.eChain,chainName):
chain = root.GetObjectFromPath('*/Chain/'+chainName)
chain2 = chain.QueryInterface(STKObjects.IAgChain)
chain2.ClearAccess()
chain2.ComputeAccess()
txtBox.insert(END,'Computed: '+chainName+'\n')
else:
txtBox.insert(END,'Failed to Compute: '+chainName+'\n')
btnComputeChain = Button(window,text='Compute Chain',command=computeChain)
btnComputeChain.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 1,sticky=W+E)
def removeAssets():
chainName = comboChain.get()
if root.CurrentScenario.Children.Contains(STKObjects.eChain,chainName):
chain = root.GetObjectFromPath('*/Chain/'+chainName)
chain2 = chain.QueryInterface(STKObjects.IAgChain)
chain2.Objects.RemoveAll()
txtBox.insert(END,'Removed Objects: '+chainName+'\n')
else:
txtBox.insert(END,'Failed to Remove Objects: '+chainName+'\n')
btnRemoveChain = Button(window,text='Remove Objects',command=removeAssets)
btnRemoveChain.grid(column=column+1,row=row,padx=padx,pady=pady,columnspan = 1,sticky=W+E)
row += 1
# Add selected object to coverage as an asset
def AddToCoverage():
addObj = comboChainCov.get()
covName = comboCov.get()
if root.CurrentScenario.Children.Contains(STKObjects.eCoverageDefinition,covName):
cov = root.GetObjectFromPath('*/CoverageDefinition/'+covName)
cov2 = cov.QueryInterface(STKObjects.IAgCoverageDefinition)
if cov2.AssetList.CanAssignAsset(addObj):
cov2.AssetList.Add(addObj)
txtBox.insert(END,'Added: '+addObj.split('/')[-1]+'\n')
else:
txtBox.insert(END,'Already Assigned: '+addObj.split('/')[-1]+'\n')
else:
txtBox.insert(END,'Failed to Add: '+addObj.split('/')[-1]+'\n')
btnAddCoverage = Button(window,width=width,text='Add To Coverage',command=AddToCoverage)
btnAddCoverage.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2)
row += 1
# Compute coverage accesses
def computeCov():
covName = comboCov.get()
if root.CurrentScenario.Children.Contains(STKObjects.eCoverageDefinition,covName):
cov = root.GetObjectFromPath('*/CoverageDefinition/'+covName)
cov2 = cov.QueryInterface(STKObjects.IAgCoverageDefinition)
cov2.ClearAccesses()
cov2.ComputeAccesses()
txtBox.insert(END,'Computed: '+covName+'\n')
else:
txtBox.insert(END,'Failed to Compute: '+covName+'\n')
btnComputeCoverage = Button(window,text='Compute Coverage',command=computeCov)
btnComputeCoverage.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 1,sticky=W+E)
def removeAssetsCov():
covName = comboCov.get()
if root.CurrentScenario.Children.Contains(STKObjects.eCoverageDefinition,covName):
cov = root.GetObjectFromPath('*/CoverageDefinition/'+covName)
cov2 = cov.QueryInterface(STKObjects.IAgCoverageDefinition)
cov2.AssetList.RemoveAll()
txtBox.insert(END,'Removed Assets: '+covName+'\n')
else:
txtBox.insert(END,'Failed to Remove Assets: '+covName+'\n')
btnRemoveCov = Button(window,text='Remove Assets',command=removeAssetsCov)
btnRemoveCov.grid(column=column+1,row=row,padx=padx,pady=pady,columnspan = 1,sticky=W+E)
row += 1
row += 3
txtBox = scrolledtext.ScrolledText(window,width=35,height=10)
txtBox.insert(INSERT,'Connected: '+root.CurrentScenario.InstanceName+'\n')
txtBox.grid(column=column,row=row,padx=padx+0,pady=pady,rowspan=4,columnspan = 3,sticky=W+E)
rowTxt = row
######################################### Col2 ########################################################
# Labels
width2 = 30
column = 2
row = 1
lblCreateCon = Label(window,text = 'Create/Save Constellation:')
lblCreateCon.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 1
lblCon = Label(window,text = 'Constellation:')
lblCon.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 1
# MTO Options
row += 1
lblColor = Label(window,text = 'MTO/Satellite Color:')
lblColor.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row +=1
lblDA = Label(window,text = 'Access From:')
lblDA.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 1
lblStart = Label(window,text = 'Start Time:')
lblStart.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 1
lblStop = Label(window,text = 'Stop Time:')
lblStop.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 1
lblSatTemp = Label(window,text = 'Satellite Template:')
lblSatTemp.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 2
lblSatTemp = Label(window,text = 'Chain/Coverage Object:')
lblSatTemp.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 1
lblSatTemp = Label(window,text = 'Chain:')
lblSatTemp.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 1
lblSatTemp = Label(window,text = 'Coverage:')
lblSatTemp.grid(column=column,row=row,padx=padx,pady=pady,sticky=E)
row += 2
######################################### Col3 ########################################################
column = 3
row=1
# Entry box for Create Constellation
txt = Entry(window,width=width2+3)
txt.delete(0, END)
txt.insert(0, 'NewConstellationName')
txt.grid(column=column, row=row,padx=padx,pady=pady,columnspan=2,sticky=W)
row += 1
# Constellation Options
def updateTCEList():
tceList = [f.split('.')[0] for f in listdir(cwdFiles+'\\Constellations') if (isfile(join(cwdFiles+'\\Constellations', f))) & (f.split('.')[-1]=='tce' )& (f !='deckAccessTLE.tce')]
comboCon['values'] = tceList
comboCon = Combobox(window,width=width2,state='readonly',postcommand = updateTCEList)
updateTCEList()
comboCon.current(0) # set the selected item
comboCon.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 1
# Radio Buttons
def frameValue():
frame = selectedFrame.get()
if frame == 0:
frame = 'Inertial'
elif frame == 1:
frame = 'Fixed'
return frame
selectedFrame = IntVar()
selectedFrame.set(0)
radFrame1 = Radiobutton(window,text='Inertial', value=0, variable=selectedFrame)
radFrame2 = Radiobutton(window,text='Fixed', value=1, variable=selectedFrame)
radFrame1.grid(column=column-1,row=row,padx=padx,pady=pady)
radFrame2.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 1
# Colors
colorsList = ['Green','Cyan','Blue','Magenta','Red','Yellow','White','Black']
comboColor = Combobox(window,width=width2,state='readonly')
comboColor['values'] = colorsList
comboColor.current(0) # set the selected item
comboColor.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row +=1
# Deck Access Available Objects
def updateAccessList(root):
objs = deckAccessAvailableObjs(root)
for ii in range(len(objs)):
objType = objs[ii].split('/')[-2]
if objType == 'Sensor':
objs[ii] = '/'.join(objs[ii].split('/')[-4:])
else:
objs[ii] = '/'.join(objs[ii].split('/')[-2:])
comboDA['values'] = objs
comboDA = Combobox(window,width=width2,state='readonly',postcommand = lambda: updateAccessList(root))
updateAccessList(root)
try:
comboDA.current(0) # set the selected item
except:
pass
comboDA.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 1
# Entry box Times
startTimeUTCG = root.ConversionUtility.ConvertDate('EpSec','UTCG',str(startTime))
txtStart = Entry(window,width=width2+3)
txtStart.delete(0, END)
txtStart.insert(0, startTimeUTCG)
txtStart.grid(column=column,row=row,padx=padx,pady=pady,columnspan=2,sticky=W)
startTime = root.ConversionUtility.ConvertDate('UTCG','EpSec',str(txtStart.get()))
row += 1
stopTimeUTCG = root.ConversionUtility.ConvertDate('EpSec','UTCG',str(stopTime))
txtStop = Entry(window,width=width2+3)
txtStop.delete(0, END)
txtStop.insert(0, stopTimeUTCG)
txtStop.grid(column=column,row=row,padx=padx,pady=pady,columnspan=2,sticky=W)
stopTime = root.ConversionUtility.ConvertDate('UTCG','EpSec',str(txtStop.get()))
row += 1
# Satellite Template
def updateSatList(root):
sats = FilterObjectsByType(root,'Satellite',name = '')
for ii in range(len(sats)):
sats[ii] = sats[ii].split('/')[-1]
sats.insert(0,'')
comboSat['values'] = sats
comboSat = Combobox(window,width=width2,state='readonly',postcommand = lambda: updateSatList(root))
updateSatList(root)
try:
comboSat.current(0) # set the selected item
except:
pass
comboSat.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 1
# Radio Buttons
selected = IntVar()
selected.set(1)
rad1 = Radiobutton(window,text='Deck Access Only', value=1, variable=selected)
rad2 = Radiobutton(window,text='Entire Constellation', value=2, variable=selected)
rad1.grid(column=column-1,row=row,padx=padx,pady=pady)
rad2.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 1
# Deck Access Available Objects
def updateChainCovList(root):
objs = chainCovAvailableObjs(root)
for ii in range(len(objs)):
objSplit = objs[ii].split('/')
if objSplit[-4] =='Scenario':
objs[ii] = '/'.join(objSplit[-2:])
elif objSplit[-4]=='Sensor':
objs[ii] = '/'.join(objSplit[-6:])
else:
objs[ii] = '/'.join(objSplit[-4:])
comboChainCov['values'] = objs
comboChainCov = Combobox(window,width=width2,state='readonly',postcommand = lambda: updateChainCovList(root))
updateChainCovList(root)
try:
comboChainCov.current(0) # set the selected item
except:
pass
comboChainCov.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 1
# Chain Template
def updateChainList(root):
chains = FilterObjectsByType(root,'Chain',name = '')
for ii in range(len(chains)):
chains[ii] = chains[ii].split('/')[-1]
# chains.insert(0,'')
comboChain['values'] = chains
comboChain = Combobox(window,width=width2,state='readonly',postcommand = lambda: updateChainList(root))
updateChainList(root)
try:
comboChain.current(0) # set the selected item
except:
pass
comboChain.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 1
# Chain Coverage
def updateCovList(root):
covs = FilterObjectsByType(root,'CoverageDefinition',name = '')
for ii in range(len(covs)):
covs[ii] = covs[ii].split('/')[-1]
# covs.insert(0,'')
comboCov['values'] = covs
comboCov = Combobox(window,width=width2,state='readonly',postcommand = lambda: updateCovList(root))
updateCovList(root)
try:
comboCov.current(0) # set the selected item
except:
pass
comboCov.grid(column=column,row=row,padx=padx,pady=pady,columnspan = 2,sticky=W)
row += 2
# row += 4
# Unload Satellites
btnUnload = Button(window,width=15,text='Unload Satellites',command=lambda: UnloadObjs(root,'Satellite',pattern=txtUnload.get()))
btnUnload.grid(column=3,row=rowTxt+0,padx=padx,pady=pady,sticky=W+E)
txtUnload = Entry(window,width=15)
txtUnload.delete(0, END)
txtUnload.insert(0, 'tle-*')
txtUnload.grid(column=4,row=rowTxt+0,padx=padx,pady=pady,columnspan = 1,sticky=W)
btnUnloadMTO = Button(window,width=15,text='Unload MTOs',command=lambda: UnloadObjs(root,'MTO',pattern=txtUnloadMTO.get()))
btnUnloadMTO.grid(column=3,row=rowTxt+1,padx=padx,pady=pady,sticky=W)
txtUnloadMTO = Entry(window,width=15)
txtUnloadMTO.delete(0, END)
txtUnloadMTO.insert(0, '*')
txtUnloadMTO.grid(column=4,row=rowTxt+1,padx=padx,pady=pady,columnspan = 1,sticky=W)
btnUnloadCon = Button(window,width=15,text='Unload Con.',command=lambda: UnloadObjs(root,'Constellation',pattern=txtUnloadCon.get()))
btnUnloadCon.grid(column=3,row=rowTxt+2,padx=padx,pady=pady,sticky=W)
txtUnloadCon = Entry(window,width=15)
txtUnloadCon.delete(0, END)
txtUnloadCon.insert(0, '*')
txtUnloadCon.grid(column=4,row=rowTxt+2,padx=padx,pady=pady,columnspan = 1,sticky=W)
def clear():
txtBox.delete(1.0,END)
btnClear = Button(window,width=15,text='Clear TextBox',command=clear)
btnClear.grid(column=3,row=rowTxt+3,padx=padx,pady=pady,sticky=W)
# Keep window open
window.mainloop()
```
|
github_jupyter
|
## Using low dimensional embeddings to discover subtypes of breast cancer
This notebook is largely based on https://towardsdatascience.com/reduce-dimensions-for-single-cell-4224778a2d67 (credit to Nikolay Oskolkov).
https://www.nature.com/articles/s41467-018-07582-3#data-availability
```
import pandas as pd
import numpy as np
import GEOparse
from matplotlib import pyplot as plt
GEO_ID = "GSE111229" # from the article
```
#### Exercise 1. load the dataset into `rna_seq` using GEOparse.
```
# %load solutions/ex4_1.py
rna_seq = GEOparse.get_GEO(geo=GEO_ID, destdir="./")
dir(rna_seq)
rna_seq.download_SRA??
rna_seq.geotype
rna_seq.phenotype_data.shape
rna_seq.phenotype_data.shape
rna_seq.to_soft('test', False)
cafs = pd.read_csv('data/CAFs.txt', sep='\t')
sorted(cafs.cluster.unique())
expr = cafs
```
### The expression matrix
716 cells have been sequenced, and the expression levels have been assessed for 558 genes. Arranging the cells as rows and the genes as columns, we obtain an *expression matrix*.
```
expr.shape
expr
```
Before going further, take a moment to reflect on how you would try to reveal any pattern in this data, given what you already know.
#### Plot the expression matrix
```
plt.figure(figsize=(8,8))
plt.imshow(expr.values, cmap='Greens', vmax=4000, vmin=0)
plt.title('Expression matrix')
plt.ylabel('Cells')
plt.xlabel('Genes')
plt.colorbar()
plt.show()
```
#### Exercise 2. The data is very sparse (most entries are zero). Can you quantify how sparse it is (i.e. how many of the entries are 0)?
```
# %load solutions/ex4_2.py
np.count_nonzero(expr.values) / np.prod(expr.shape)
# only 20% of the entries are non-zero.
print("\n" + "Dimensions of input file: " + str(expr.shape) + "\n")
print("\n" + "Last column corresponds to cluster assignments: " + "\n")
print(expr.iloc[0:4, (expr.shape[1]-4):expr.shape[1]])
X = expr.values[:,0:(expr.shape[1]-1)]
Y = expr.values[:,expr.shape[1]-1] #cluster
X = np.log(X + 1)
cafs.dtypes.unique()
```
### Decomposing the signals
Now that we have gained some basic understanding of the data, we see it is fit for machine learning. You have already seen a few techniques for dimensionality reduction. We start with PCA.
```
from sklearn.decomposition import PCA
#from matplotlib import cm
#dir(cm) # available colors
```
#### Exercise 3. Perform PCA on the expression data and visualize the results (with colors to represent the ground truth clusters)
```
# %load solutions/ex4_3.py
model = PCA()
pca = model.fit_transform(X)
plt.scatter(pca[:, 0], pca[:, 1], c = Y, cmap = 'rainbow', s = 1)
plt.xlabel("PC1", fontsize = 20); plt.ylabel("PC2", fontsize = 20)
plt.plot(model.explained_variance_ratio_[:10])
plt.xticks(range(10));plt.show()
```
PCA is completely unsupervised. Linear discriminant analysis (LDA) is often used for the same purpose as PCA (dimensionality reduction), but is strictly speaking not unsupervised.
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
model = LinearDiscriminantAnalysis(n_components = 2, priors = None, shrinkage = 'auto',
solver = 'eigen', store_covariance = False, tol = 0.0001)
lda = model.fit_transform(X, Y)
plt.scatter(lda[:, 0], lda[:, 1], c = Y, cmap = 'viridis', s = 1)
plt.xlabel("LDA1", fontsize = 20); plt.ylabel("LDA2", fontsize = 20)
feature_importances = pd.DataFrame({'Gene':np.array(expr.columns)[:-1],
'Score':abs(model.coef_[0])})
print(feature_importances.sort_values('Score', ascending = False).head(20))
```
How to interpret the plot above: we clearly see the data lie in three clusters, suggesting we have found three separable expression signatures. However, we also see that one of these clusters contains two of the ground-truth clusters (the colors are imposed because we know the ground truth; unsupervised methods are generally used for data exploration, where such labels are not available).
# Non-linear methods
# t-SNE
t-SNE is a very popular decomposition technique used in molecular biology, especially for visualization purposes. t-SNE generally does not cope well with high dimensionality, so it is common to first transform the data with PCA and then run the result through t-SNE. Here we will try both with and without pre-reducing the dimensionality.
```
from sklearn.manifold import TSNE
model = TSNE(learning_rate = 10, n_components = 2, random_state = 123, perplexity = 30)
tsne = model.fit_transform(X)
plt.scatter(tsne[:, 0], tsne[:, 1], c = Y, cmap = 'rainbow', s = 2, marker='x')
plt.title('tSNE', fontsize = 20)
plt.xlabel("tSNE1", fontsize = 20)
plt.ylabel("tSNE2", fontsize = 20)
```
#### Exercise 4. Reduce the data first with PCA to 30 principal components, then rerun the tSNE on this transformed data.
```
# %load solutions/ex4_4.py
X_reduced = PCA(n_components = 30).fit_transform(X)
model = TSNE(learning_rate = 10, n_components = 2, random_state = 123, perplexity = 30)
tsne = model.fit_transform(X_reduced)
plt.scatter(tsne[:, 0], tsne[:, 1], c = Y, cmap = 'rainbow', s = 2, marker='x')
plt.title('tSNE on PCA', fontsize = 20)
plt.xlabel("tSNE1", fontsize = 20)
plt.ylabel("tSNE2", fontsize = 20)
```
While it can be hard to discern the performance boost from pre-reduction, we can certainly see that t-SNE performs better than a linear method like PCA. However, non-linearity is no guarantee of success in itself. For instance, Isomap does not do well with this data.
```
from sklearn.manifold import Isomap
model = Isomap()
isomap = model.fit_transform(X)
plt.scatter(isomap[:, 0], isomap[:, 1], c = Y, cmap = 'viridis', s = 1)
plt.title('ISOMAP')
#plt.colorbar()
plt.xlabel("ISO1")
plt.ylabel("ISO2")
```
We should not throw Isomap out the window just yet; as with most algorithms, there is no one-size-fits-all. Isomap is well suited for data without clear clusters but with continuous change, as sketched below.
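To illustrate this point, here is a small, self-contained sketch (synthetic data, not part of the original analysis): Isomap applied to scikit-learn's Swiss-roll dataset, a continuous manifold with no discrete clusters, where it recovers the underlying parameterisation well.
```
# Illustrative only: Isomap on a continuous manifold (synthetic Swiss roll),
# where there are no discrete clusters but a smooth, continuous structure.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from matplotlib import pyplot as plt

X_roll, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X_roll)

plt.scatter(embedding[:, 0], embedding[:, 1], c=t, cmap='viridis', s=2)
plt.title('Isomap on a Swiss roll')
plt.xlabel('ISO1')
plt.ylabel('ISO2')
plt.show()
```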
# UMAP
A more recent alternative to t-SNE is [UMAP](https://arxiv.org/abs/1802.03426), which also produces high-quality visualizations with good separation, and scales better than t-SNE to large datasets.
```
from umap import UMAP
print("Performing Uniform Manifold Approximation and Projection (UMAP) ...")
#model = UMAP(n_neighbors = 30, min_dist = 0.3, n_components = 2)
model = UMAP()
umap = model.fit_transform(X) # or X_reduced
plt.scatter(umap[:, 0], umap[:, 1], c = Y, cmap = 'viridis', s = 1)
plt.title('UMAP')
#plt.colorbar()
plt.xlabel("UMAP1")
plt.ylabel("UMAP2")
```
#### Conclusion
In summary, when doing data exploration of gene expression (and other biomedical data), non-linear methods are preferred to linear ones.
|
github_jupyter
|
```
import hoomd
import hoomd.hpmc
import ex_render
import math
from matplotlib import pyplot
import numpy
%matplotlib inline
```
# Selecting move sizes
HPMC allows you to set the translation and rotation move sizes. Set the move size too small and almost all trial moves are accepted, but it takes many time steps to move the whole system an appreciable amount. Set the move size too large and individual moves will advance the system significantly, but most of the trial moves are rejected.
To find the true optimal move size, you need to define the slowest-evolving order parameter in the system. Then perform simulations at many move sizes and find the one where that order parameter has the fastest decorrelation time.
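As a rough sketch of that procedure (the helper below is not part of HOOMD, and the order-parameter series is synthetic), one can estimate an integrated autocorrelation time for the order-parameter time series recorded at each candidate move size and keep the move size that minimizes it:
```
# Rough sketch (not part of HOOMD): estimate an integrated autocorrelation time
# for an order-parameter time series; repeat for each move size and keep the
# move size with the shortest time.
import numpy

def integrated_autocorrelation_time(series, max_lag=200):
    x = numpy.asarray(series, dtype=float)
    x = x - x.mean()
    var = x.var()
    if var == 0:
        return 0.0
    tau = 1.0
    for lag in range(1, min(max_lag, len(x) - 1)):
        c = numpy.mean(x[:-lag] * x[lag:]) / var
        if c <= 0:          # truncate at the first non-positive correlation
            break
        tau += 2.0 * c
    return tau

# example with a synthetic, correlated series standing in for an order parameter
rng = numpy.random.RandomState(0)
noise = rng.normal(size=5000)
series = numpy.convolve(noise, numpy.ones(20) / 20, mode='valid')
print('estimated autocorrelation time:', integrated_autocorrelation_time(series))
```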
## Acceptance rule of thumb
In a wide range of systems, the optimal move size is one where the move acceptance ratio is 20%. This rule applies in moderately dense to dense system configurations. HPMC can auto-tune the move size to meet a given acceptance ratio. To demonstrate, here is the hard square tutorial script:
```
hoomd.context.initialize('--mode=cpu');
system = hoomd.init.create_lattice(unitcell=hoomd.lattice.sq(a=1.2), n=10);
mc = hoomd.hpmc.integrate.convex_polygon(d=0.1, a=0.1, seed=1);
square_verts = [[-0.5, -0.5], [0.5, -0.5], [0.5, 0.5], [-0.5, 0.5]];
mc.shape_param.set('A', vertices=square_verts);
log1 = hoomd.analyze.log(filename="log-output.log",
quantities=['hpmc_sweep',
'hpmc_translate_acceptance',
'hpmc_rotate_acceptance',
'hpmc_d',
'hpmc_a',
'hpmc_move_ratio',
'hpmc_overlap_count'],
period=10,
overwrite=True);
```
Activate the tuner and tell it to tune both the **d** and **a** moves.
You can restrict it to only tune one of the move types and provide a range of move sizes the tuner is allowed to choose from. This example sets a maximum translation move size of half the particle width, and a maximum rotation move size that rotates the square all the way to the next symmetric configuration.
```
tuner = hoomd.hpmc.util.tune(obj=mc, tunables=['d', 'a'], max_val=[0.5, 2*math.pi/4], target=0.2);
```
Update the tuner between short runs. It will examine the acceptance ratio and adjust the move sizes to meet the target acceptance ratio.
```
for i in range(20):
hoomd.run(100, quiet=True);
tuner.update();
```
In this example, the acceptance ratios converge after only 10 steps of the tuner.
```
data = numpy.genfromtxt(fname='log-output.log', skip_header=True);
pyplot.figure(figsize=(4,2.2), dpi=140);
pyplot.plot(data[:,0], data[:,2], label='translate acceptance');
pyplot.plot(data[:,0], data[:,4], label='d');
pyplot.xlabel('time step');
pyplot.ylabel('acceptance / move size');
pyplot.legend();
pyplot.figure(figsize=(4,2.2), dpi=140);
pyplot.plot(data[:,0], data[:,3], label='rotate acceptance');
pyplot.plot(data[:,0], data[:,5], label='a');
pyplot.xlabel('time step');
pyplot.ylabel('acceptance / move size');
pyplot.legend(loc='right');
```
## Sampling equilibrium states
Strictly speaking, changing the move size with a tuner **VIOLATES DETAILED BALANCE**. When you make ensemble averages, do not include the period of the simulation where you executed the tuner. This example shows how to make the equilibrium run as a second stage of the script.
```
d = hoomd.dump.gsd("trajectory-square.gsd", period=1000, group=hoomd.group.all(), overwrite=True);
hoomd.run(10000);
```
Examine how the system configuration evolves over time. [ex_render](ex_render.py) is a helper script that builds animated gifs from trajectory files and system snapshots. It is part of the [hoomd-examples](https://github.com/glotzerlab/hoomd-examples) repository and designed only to render these examples.
```
ex_render.display_movie(lambda x: ex_render.render_polygon_frame(x, square_verts), 'trajectory-square.gsd')
```
|
github_jupyter
|
## Predicting a sine wave function using a Gaussian process
An example of using a Gaussian process to predict a sine wave function.
This example is from ["Gaussian Processes regression: basic introductory example"](http://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gp_regression.html).
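Note that `GaussianProcess` is a legacy class that has been removed from recent scikit-learn releases. On newer versions, a roughly equivalent noiseless fit can be sketched with `GaussianProcessRegressor` (an approximation, not a drop-in replacement for every option used below):
```
# Sketch for newer scikit-learn versions, where the legacy GaussianProcess class
# is unavailable; this reproduces the noiseless fit approximately, not exactly.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

X = np.atleast_2d([0., 1., 2., 3., 5., 6., 7., 8., 9.5]).T
y = (X * np.sin(X)).ravel()
x = np.atleast_2d(np.linspace(0, 10, 1000)).T

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10)
gp.fit(X, y)
y_pred, sigma = gp.predict(x, return_std=True)  # sigma plays the role of sqrt(MSE)
```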
```
import numpy as np
from sklearn.gaussian_process import GaussianProcess
from matplotlib import pyplot as pl
%matplotlib inline
np.random.seed(1)
# The function to predict
def f(x):
return x*np.sin(x)
# --------------------------
# First the noiseless case
# --------------------------
# Observations
X = np.atleast_2d([0., 1., 2., 3., 5., 6., 7., 8., 9.5]).T
y = f(X).ravel()
#X = np.atleast_2d(np.linspace(0, 100, 200)).T
# Mesh the input space for evaluations of the real function, the prediction and its MSE
x = np.atleast_2d(np.linspace(0, 10, 1000)).T
# Instantiate a Gaussian Process model
gp = GaussianProcess(corr='cubic', theta0=1e-2, thetaL=1e-4, thetaU=1e-1,
random_start=100)
# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, MSE = gp.predict(x, eval_MSE=True)
sigma = np.sqrt(MSE)
# Plot the function, the prediction and the 95% confidence interval based on the MSE
fig = pl.figure()
pl.plot(x, f(x), 'r:', label=u'$f(x) = x\,\sin(x)$')
pl.plot(X, y, 'r.', markersize=10, label=u'Observations')
pl.plot(x, y_pred, 'b-', label=u'Prediction')
pl.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
pl.xlabel('$x$')
pl.ylabel('$f(x)$')
pl.ylim(-10, 20)
pl.legend(loc='upper left')
# now the noisy case
X = np.linspace(0.1, 9.9, 20)
X = np.atleast_2d(X).T
# Observations and noise
y = f(X).ravel()
dy = 0.5 + 1.0 * np.random.random(y.shape)
noise = np.random.normal(0, dy)
y += noise
# Mesh the input space for evaluations of the real function, the prediction and
# its MSE
x = np.atleast_2d(np.linspace(0, 10, 1000)).T
# Instantiate a Gaussian Process model
gp = GaussianProcess(corr='squared_exponential', theta0=1e-1,
thetaL=1e-3, thetaU=1,
nugget=(dy / y) ** 2,
random_start=100)
# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, MSE = gp.predict(x, eval_MSE=True)
sigma = np.sqrt(MSE)
# Plot the function, the prediction and the 95% confidence interval based on
# the MSE
fig = pl.figure()
pl.plot(x, f(x), 'r:', label=u'$f(x) = x\,\sin(x)$')
pl.errorbar(X.ravel(), y, dy, fmt='r.', markersize=10, label=u'Observations')
pl.plot(x, y_pred, 'b-', label=u'Prediction')
pl.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
pl.xlabel('$x$')
pl.ylabel('$f(x)$')
pl.ylim(-10, 20)
pl.legend(loc='upper left')
pl.show()
```
|
github_jupyter
|
# Convolutional Neural Networks
---
In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.
The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
<img src='notebook_ims/cifar_data.png' width=70% height=70% />
### Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
```
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
```
---
## Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
#### Augmentation
In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html).
#### TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.
This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
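As one possible option (a suggestion only, not the required answer to the TODO above; `RandomCrop` and `ColorJitter` are additions not in the original pipeline), you could extend the flips and rotations like this:
```
# One possible augmented pipeline (a suggestion, not the required answer):
# a padded random crop and mild color jitter on top of the flips/rotations.
import torchvision.transforms as transforms

augmented_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomCrop(32, padding=4),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
```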
```
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
```
### Visualize a Batch of Training Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
```
### View an Image in More Detail
Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
```
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
<img src='notebook_ims/2_layer_conv.png' height=50% width=50% />
#### TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting.
It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
#### Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating the output width is given by `(W−F+2P)/S+1`.
For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
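As a quick sanity check, the formula can be wrapped in a tiny helper function (written for this notebook, not part of PyTorch):
```
# Small helper (written for this notebook, not part of PyTorch) implementing
# the output-size formula (W - F + 2P) / S + 1.
def conv_output_size(W, F, S=1, P=0):
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1, P=0))   # 5x5 output
print(conv_output_size(7, 3, S=2, P=0))   # 3x3 output
print(conv_output_size(32, 3, S=1, P=1))  # 32x32 output, as used by conv1 below
```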
```
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
```
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error.
#### TODO: Define the loss and optimizer and see how these choices change the loss over time.
```
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
---
## Train the Network
Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
```
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
```
### Load the Model with the Lowest Validation Loss
```
model.load_state_dict(torch.load('model_augmented.pt'))
```
---
## Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
```
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
```
### Visualize Sample Test Results
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
```
|
github_jupyter
|
### Convolutional autoencoder
Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders. In practical settings, autoencoders applied to images are always convolutional autoencoders --they simply perform much better.
Let's implement one. The encoder will consist of a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist of a stack of Conv2D and UpSampling2D layers.
```
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
import numpy as np
input_img = Input(shape=(32, 32, 3)) # adapt this if using `channels_first` image data format
x1 = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x2 = MaxPooling2D((2, 2), padding='same')(x1)
x3 = Conv2D(8, (6, 6), activation='relu', padding='same')(x2)
x4 = MaxPooling2D((2, 2), padding='same')(x3)
x5 = Conv2D(8, (9, 9), activation='relu', padding='same')(x4)
encoded = MaxPooling2D((2, 2), padding='same')(x5)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x6 = Conv2D(8, (9, 9), activation='relu', padding='same')(encoded)
x7 = UpSampling2D((2, 2))(x6)
x8 = Conv2D(8, (6, 6), activation='relu', padding='same')(x7)
x9 = UpSampling2D((2, 2))(x8)
x10 = Conv2D(16, (3, 3), activation='relu', padding='same')(x9)
x11 = UpSampling2D((2, 2))(x10)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x11)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adagrad', loss='binary_crossentropy')
from keras.datasets import cifar10
import numpy as np
(x_train, _), (x_test, _) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 32, 32, 3)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 32, 32, 3)) # adapt this if using `channels_first` image data format
autoencoder.fit(x_train, x_train,
epochs=50,
batch_size=128,
shuffle=True,
validation_data=(x_test, x_test))
from keras.models import load_model
#autoencoder.save('cifar10_autoencoders.h5') # creates a HDF5 file 'my_model.h5'
#del model # deletes the existing model.
# returns a compiled model
# identical to the previous one
autoencoder = load_model('cifar10_autoencoders.h5')
import matplotlib.pyplot as plt
decoded_imgs = autoencoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(32, 32, 3))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(32, 32, 3))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
### Plotting the weights from the first layer
```
import matplotlib.pyplot as plt
n = 8
for i in range(n):
fig = plt.figure(figsize=(1,1))
conv_1 = np.asarray(autoencoder.layers[1].get_weights())[0][:,:,0,i]
ax = fig.add_subplot(111)
plt.imshow(conv_1.transpose(), cmap = 'gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
autoencoder.layers[3].get_weights()
from keras import backend as K
# K.learning_phase() is a flag that indicates if the network is in training or
# prediction phase. It allows layers (e.g. Dropout) to only be applied during training
inputs = [K.learning_phase()] + autoencoder.inputs
_layer1_f = K.function(inputs, [x2])
def convout1_f(X):
# The [0] is to disable the training phase flag
return _layer1_f([0] + [X])
#_lay_f = K.function(inputs, [x1])
#def convout1_f(X):
# The [0] is to disable the training phase flag
# return _layer1_f([0] + [X])
_layer2_f = K.function(inputs, [x4])
def convout2_f(X):
# The [0] is to disable the training phase flag
return _layer2_f([0] + [X])
_layer3_f = K.function(inputs, [encoded])
def convout3_f(X):
# The [0] is to disable the training phase flag
return _layer3_f([0] + [X])
_up_layer1_f = K.function(inputs, [x6])
def convout4_f(X):
# The [0] is to disable the training phase flag
return _up_layer1_f([0] + [X])
_up_layer2_f = K.function(inputs, [x8])
def convout5_f(X):
# The [0] is to disable the training phase flag
return _up_layer2_f([0] + [X])
_up_layer3_f = K.function(inputs, [x10])
def convout6_f(X):
# The [0] is to disable the training phase flag
return _up_layer3_f([0] + [X])
_up_layer4_f = K.function(inputs, [decoded])
def convout7_f(X):
# The [0] is to disable the training phase flag
return _up_layer4_f([0] + [X])
x2
i = 1
x = x_test[i:i+1]
```
### Visualizing the first convnet/output layer_1 with sample first test image
```
np.squeeze(np.squeeze(np.array(convout1_f(x)),0),0).shape
#Plotting conv_1
for i in range(4):
#i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout1_f(x)),0),0)
temp = x[0,:,:,:]
fig, axes = plt.subplots(1, 1, figsize=(3, 3))
plt.imshow(temp)
plt.show()
k = 0
while k < check.shape[2]:
#plt.figure()
#plt.subplot(231 + i)
fig, axes = plt.subplots(4, 4, figsize=(5, 5))
for i in range(4):
for j in range(4):
axes[i,j].imshow(check[:,:,k], cmap = 'gray')
k += 1
#axes[0, 0].imshow(R, cmap='jet')
#plt.imshow(check[:,:,i])
plt.show()
check.shape
```
### Visualizing the second convnet/output layer_2 with sample test image
```
i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout2_f(x)),0),0)
check.shape
#Plotting conv_2
for i in range(4):
#i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout2_f(x)),0),0)
temp = x[0,:,:,:]
fig, axes = plt.subplots(1, 1, figsize=(3, 3))
plt.imshow(temp)
plt.show()
k = 0
while k < check.shape[2]:
#plt.figure()
#plt.subplot(231 + i)
fig, axes = plt.subplots(2, 4, figsize=(5, 5))
for i in range(2):
for j in range(4):
axes[i,j].imshow(check[:,:,k])
k += 1
#axes[0, 0].imshow(R, cmap='jet')
#plt.imshow(check[:,:,i])
plt.show()
```
### Plotting the third convnet/output layer_3 with sample test image
```
i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout3_f(x)),0),0)
check.shape
#Plotting conv_3
for i in range(4):
#i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout3_f(x)),0),0)
temp = x[0,:,:,:]
fig, axes = plt.subplots(1, 1, figsize=(3, 3))
plt.imshow(temp)
plt.show()
k = 0
while k < check.shape[2]:
#plt.figure()
#plt.subplot(231 + i)
fig, axes = plt.subplots(2, 4, figsize=(5, 5))
for i in range(2):
for j in range(4):
axes[i,j].imshow(check[:,:,k])
k += 1
#axes[0, 0].imshow(R, cmap='jet')
#plt.imshow(check[:,:,i])
plt.show()
```
### Visualizing the fourth convnet/decoded/output layer_4 with sample test image
```
i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout4_f(x)),0),0)
check.shape
#Plotting conv_4
for i in range(4):
#i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout4_f(x)),0),0)
temp = x[0,:,:,:]
fig, axes = plt.subplots(1, 1, figsize=(3, 3))
plt.imshow(temp)
plt.show()
k = 0
while k < check.shape[2]:
#plt.figure()
#plt.subplot(231 + i)
fig, axes = plt.subplots(2, 4, figsize=(5, 5))
for i in range(2):
for j in range(4):
axes[i,j].imshow(check[:,:,k])
k += 1
#axes[0, 0].imshow(R, cmap='jet')
#plt.imshow(check[:,:,i])
plt.show()
```
### Visualizing the fifth convnet/decoded/output layer_5 with sample test image
```
i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout5_f(x)),0),0)
check.shape
#Plotting conv_5
for i in range(4):
#i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout5_f(x)),0),0)
temp = x[0,:,:,:]
fig, axes = plt.subplots(1, 1, figsize=(3, 3))
plt.imshow(temp)
plt.show()
k = 0
while k < check.shape[2]:
#plt.figure()
#plt.subplot(231 + i)
fig, axes = plt.subplots(2, 4, figsize=(5, 5))
for i in range(2):
for j in range(4):
axes[i,j].imshow(check[:,:,k])
k += 1
#axes[0, 0].imshow(R, cmap='jet')
#plt.imshow(check[:,:,i])
plt.show()
```
### Visualizing the sixth convnet/decoded/output layer_6 with sample test image
```
i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout6_f(x)),0),0)
check.shape
#Plotting conv_6
for i in range(4):
#i = 3
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout6_f(x)),0),0)
temp = x[0,:,:,:]
fig, axes = plt.subplots(1, 1, figsize=(3, 3))
plt.imshow(temp)
plt.show()
k = 0
while k < check.shape[2]:
#plt.figure()
#plt.subplot(231 + i)
fig, axes = plt.subplots(4, 4, figsize=(5, 5))
for i in range(4):
for j in range(4):
axes[i,j].imshow(check[:,:,k])
k += 1
#axes[0, 0].imshow(R, cmap='jet')
#plt.imshow(check[:,:,i])
plt.show()
```
### Visualizing the final decoded/output layer with sample test image
```
i = 1
x = x_test[i:i+1]
check = np.squeeze(np.squeeze(np.array(convout7_f(x)),0),0)
check.shape
#Plot final decoded layer
decoded_imgs = autoencoder.predict(x_test)
n = 4
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(32, 32, 3))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(32, 32, 3))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
|
github_jupyter
|
```
import sys
import os
import glob
import subprocess as sp
import multiprocessing as mp
import pandas as pd
import numpy as np
from basic_tools import *
debug=False
def run_ldsc(pheno_code,ld,output,mode='original',samp_prev=np.nan,pop_prev=np.nan):
if os.path.exists(ldsc_path.format(pheno_code)+'.log'):
print("Congratulations!. ldsc result of",pheno_code,"exists. passed.")
return
if mode=='original':
script=['ldsc.py','--h2',sumstats_path.format(pheno_code)+'.sumstats.gz',
'--ref-ld-chr',ld_path.format(ld,''),
'--w-ld-chr',wld_path,
'--out',ldsc_path.format(output)]
elif mode=='my':
script=['ldsc_my.py','--h2',sumstats_path.format(pheno_code)+'.sumstats.gz',
'--ref-ld-chr',ld_path.format(ld,''),
'--w-ld-chr',wld_path,
'--out',ldsc_path.format(output)]
else:
print("run_ldsc mode Error!!!!!!!")
if np.isnan(samp_prev)==False and np.isnan(pop_prev)==False:
script+=['--samp-prev',str(samp_prev),'--pop-prev',str(pop_prev)]
print('Started:',' '.join(script))
sp.call(script)
print('Finished:',' '.join(script))
def run_ldsc_wrapper(prefix,scale,pheno_code,samp_prev=np.nan,pop_prev=np.nan):
run_ldsc(pheno_code,prefix,'{}.{}'.format(prefix,pheno_code),mode='original' if mode=='uni' else 'my',samp_prev=samp_prev,pop_prev=pop_prev)
sys.argv#uni 0 20 x x
mode=sys.argv[1]
scale=int(sys.argv[2])
cores=int(sys.argv[3])
start=int(sys.argv[4])
end=int(sys.argv[5])
if mode=='uni':
prefix=mode
else:
prefix=mode+str(scale)
#start,end,prefix=0,1000,'bp300'
phenotypes_uni_filtered['prevalence']=phenotypes_uni_filtered['n_cases']/phenotypes_uni_filtered['n_non_missing']
phenotypes_uni_filtered.shape
pheno_code_list_todo=[]
for idx,row in phenotypes_uni_filtered.iloc[start:end].iterrows():
if os.path.exists(ldsc_path.format('{}.{}'.format(prefix,idx))+'.log'):
#print(ldsc_path.format('{}.{}'.format(prefix,idx))+'.log','exists')
continue
print(idx,end=' ')
pheno_code_list_todo.append((idx,row['prevalence']))
"""
phenotypes_filtered['prevalence']=phenotypes_filtered['n_cases']/phenotypes_filtered['n_non_missing']
phenotypes_filtered.shape
pheno_code_list_todo=[]
for idx,row in phenotypes_filtered.iloc[start:end].iterrows():
if os.path.exists(ldsc_path.format('{}.{}'.format(prefix,idx))+'.log'):
continue
print(idx,end=' ')
pheno_code_list_todo.append((idx,row['prevalence']))
"""
```
```
jupyter nbconvert 5_run_ldsc.ipynb --to script
export SCREENDIR=$HOME/.screen
start=0;end=600;mode=uni
python 5_run_ldsc.py $mode 0 10 $start $end
start=0;end=600;mode=bp
python 5_run_ldsc.py $mode 300 10 $start $end && python 5_run_ldsc.py $mode 128 10 $start $end && python 5_run_ldsc.py $mode 64 5 $start $end && python 5_run_ldsc.py $mode 32 5 $start $end && python 5_run_ldsc.py $mode 16 5 $start $end && python 5_run_ldsc.py $mode 8 2 $start $end
```
```
#pool = mp.Pool(processes=15)
#pool.starmap(run_ldsc_wrapper,[(mode,scale,pheno_code,prevelence,prevelence) for (pheno_code,prevelence) in pheno_code_list_todo])
pool = mp.Pool(processes=cores)
#pool.starmap(run_ldsc_wrapper,[(mode,scale,pheno_code) for pheno_code in pheno_code_list_todo])
pool.starmap(run_ldsc_wrapper,[(prefix,scale,pheno_code,prevelence,prevelence) for (pheno_code,prevelence) in pheno_code_list_todo])
```
|
github_jupyter
|
# anesthetic plot gallery
This functions as both some examples of plots that can be produced, and a tutorial.
Any difficulties/issues/requests should be posted as a [GitHub issue](https://github.com/williamjameshandley/anesthetic/issues)
## Download example data
Download some example data from GitHub (or alternatively use your own chains files).
This downloads the PLA chains for the Planck baseline cosmology,
and the equivalent nested sampling chains:
```
import requests
import tarfile
for filename in ["plikHM_TTTEEE_lowl_lowE_lensing.tar.gz","plikHM_TTTEEE_lowl_lowE_lensing_NS.tar.gz"]:
github_url = "https://github.com/williamjameshandley/cosmo_example/raw/master/"
url = github_url + filename
open(filename, 'wb').write(requests.get(url).content)
tarfile.open(filename).extractall()
```
## Marginalised posterior plotting
Import anesthetic and load the MCMC samples:
```
%matplotlib inline
import matplotlib.pyplot as plt
from anesthetic import MCMCSamples, make_2d_axes
mcmc_root = 'plikHM_TTTEEE_lowl_lowE_lensing/base_plikHM_TTTEEE_lowl_lowE_lensing'
mcmc = MCMCSamples(root=mcmc_root)
```
We have plotting tools for 1D plots ...
```
fig, axes = mcmc.plot_1d('omegabh2') ;
```
... multiple 1D plots ...
```
fig, axes = mcmc.plot_1d(['omegabh2','omegach2','H0','tau','logA','ns']);
fig.tight_layout()
```
... triangle plots ...
```
mcmc.plot_2d(['omegabh2','omegach2','H0'], types={'lower':'kde','diagonal':'kde'});
```
... triangle plots (with the equivalent scatter plot filling up the left hand side) ...
```
mcmc.plot_2d(['omegabh2','omegach2','H0']);
```
... and rectangle plots.
```
mcmc.plot_2d([['omegabh2','omegach2','H0'], ['logA', 'ns']]);
```
Rectangle plots are pretty flexible with what they can do:
```
mcmc.plot_2d([['omegabh2','omegach2','H0'], ['H0','omegach2']]);
```
## Changing the appearance
Anesthetic tries to follow matplotlib conventions as much as possible, so
most changes to the appearance should be relatively straightforward.
Here are some examples:
* **figure size**:
```
fig = plt.figure(figsize=(5, 5))
fig, axes = make_2d_axes(['omegabh2', 'omegach2', 'H0'], fig=fig, tex=mcmc.tex)
mcmc.plot_2d(axes);
```
* **legends**:
```
fig, axes = make_2d_axes(['omegabh2', 'omegach2', 'H0'], tex=mcmc.tex)
mcmc.plot_2d(axes, label='Posterior');
axes.iloc[-1, 0].legend(bbox_to_anchor=(len(axes), len(axes)), loc='upper left');
```
* **unfilled contours** & **modifying individual axes**:
```
fig, axes = make_2d_axes(['omegabh2', 'omegach2', 'H0'], tex=mcmc.tex)
mcmc.plot_2d(axes.iloc[0:1, :], types=dict(upper='kde', lower='kde', diagonal='kde'), fc=None);
mcmc.plot_2d(axes.iloc[1:2, :], types=dict(upper='kde', lower='kde', diagonal='kde'), fc=None, cmap=plt.cm.Oranges, lw=3);
mcmc.plot_2d(axes.iloc[2:3, :], types=dict(upper='kde', lower='kde', diagonal='kde'), fc='C2', ec='C3', c='C4', lw=2);
```
## Defining new parameters
You can see that the samples are stored as a (weighted) pandas DataFrame
```
mcmc[:6]
```
Since it's a (weighted) pandas DataFrame, we can compute things like the mean and variance
of the samples:
```
mcmc.mean()
```
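The weighted standard deviation and variance are available in the same way; a quick sketch, assuming these methods mirror their pandas counterparts in the same way that `mean` does:
```
mcmc.std()
mcmc.var()
```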
We can define new parameters with relative ease.
For example, the default cosmoMC setup does not include omegab, only omegabh2:
```
'omegab' in mcmc
```
However, this is pretty trivial to recompute:
```
h = mcmc['H0']/100
mcmc['omegab'] = mcmc['omegabh2']/h**2
mcmc.tex['omegab'] = '$\Omega_b$'
mcmc.plot_1d('omegab');
```
## Nested sampling plotting
Anesthetic really comes to the fore for nested sampling. We can do all of
the above, and more, with the power that NS chains provide:
```
from anesthetic import NestedSamples
nested_root = 'plikHM_TTTEEE_lowl_lowE_lensing_NS/NS_plikHM_TTTEEE_lowl_lowE_lensing'
nested = NestedSamples(root=nested_root)
```
We can infer the evidence, KL divergence and Bayesian model dimensionality:
```
ns_output = nested.ns_output()
```
This is a set of ``MCMCSamples``, with columns yielding the log of the Bayesian evidence
(logZ), the Kullback-Leibler divergence (D) and the Bayesian model dimensionality (d).
```
ns_output[:6]
```
The evidence, KL divergence and Bayesian model dimensionality, with their corresponding errors, are:
```
for x in ns_output:
print('%10s = %9.2f +/- %4.2f' % (x, ns_output[x].mean(), ns_output[x].std()))
```
Since ``ns_output`` is a set of ``MCMCSamples``, it may be plotted as usual.
Here we illustrate slightly more fine-grained control of the axes construction
(demanding three columns)
```
from anesthetic import make_1d_axes
fig, axes = make_1d_axes(['logZ', 'D', 'd'], ncols=3, tex=ns_output.tex)
ns_output.plot_1d(axes);
```
We can also inspect the correlation between these inferences:
```
ns_output.plot_2d(['logZ','D']);
```
Here is a comparison of the base and NS output
```
h = nested['H0']/100
nested['omegab'] = nested['omegabh2']/h**2
nested.tex['omegab'] = '$\Omega_b$'
fig, axes = mcmc.plot_2d(['sigma8','omegab'])
nested.plot_2d(axes=axes);
```
With nested samples, we can plot the prior (or any temperature), by
passing beta=0. We also introduce here how to create figure legends.
```
prior = nested.set_beta(0)
fig, axes = prior.plot_2d(['ns','tau'], label='prior')
nested.plot_2d(axes=axes, label='posterior')
handles, labels = axes['ns']['tau'].get_legend_handles_labels()
leg = fig.legend(handles, labels)
fig.tight_layout()
```
We can also set up an interactive plot, which allows us to replay a nested
sampling run after the fact.
```
nested.gui()
```
There are also tools for converting to alternative formats, in case you have
pipelines in other plotters:
```
from anesthetic.convert import to_getdist
getdist_samples = to_getdist(nested)
```
|
github_jupyter
|
```
%matplotlib inline
```
Sequence-to-Sequence Modeling with nn.Transformer and TorchText
===============================================================
This is a tutorial on how to train a sequence-to-sequence model
that uses the
`nn.Transformer <https://pytorch.org/docs/master/nn.html?highlight=nn%20transformer#torch.nn.Transformer>`__ module.
PyTorch 1.2 release includes a standard transformer module based on the
paper `Attention is All You
Need <https://arxiv.org/pdf/1706.03762.pdf>`__. The transformer model
has proven to be superior in quality for many sequence-to-sequence
problems while being more parallelizable. The ``nn.Transformer`` module
relies entirely on an attention mechanism (another module recently
implemented as `nn.MultiheadAttention <https://pytorch.org/docs/master/nn.html?highlight=multiheadattention#torch.nn.MultiheadAttention>`__) to draw global dependencies
between input and output. The ``nn.Transformer`` module is now highly
modularized such that a single component (like `nn.TransformerEncoder <https://pytorch.org/docs/master/nn.html?highlight=nn%20transformerencoder#torch.nn.TransformerEncoder>`__
in this tutorial) can be easily adapted/composed.

Define the model
----------------
In this tutorial, we train an ``nn.TransformerEncoder`` model on a
language modeling task. The language modeling task is to assign a
probability for how likely a given word (or a sequence of words) is
to follow a sequence of words. A sequence of tokens is passed to the embedding
layer first, followed by a positional encoding layer to account for the order
of the words (see the next paragraph for more details). The
``nn.TransformerEncoder`` consists of multiple layers of
`nn.TransformerEncoderLayer <https://pytorch.org/docs/master/nn.html?highlight=transformerencoderlayer#torch.nn.TransformerEncoderLayer>`__. Along with the input sequence, a square
attention mask is required because the self-attention layers in
``nn.TransformerEncoder`` are only allowed to attend to the earlier positions in
the sequence. For the language modeling task, any tokens in future
positions should be masked. To obtain the actual words, the output
of the ``nn.TransformerEncoder`` model is sent to the final Linear
layer, which is followed by a log-softmax function.
```
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
class TransformerModel(nn.Module):
def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
super(TransformerModel, self).__init__()
from torch.nn import TransformerEncoder, TransformerEncoderLayer
self.model_type = 'Transformer'
self.pos_encoder = PositionalEncoding(ninp, dropout)
encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
self.encoder = nn.Embedding(ntoken, ninp)
self.ninp = ninp
self.decoder = nn.Linear(ninp, ntoken)
self.init_weights()
def generate_square_subsequent_mask(self, sz):
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def init_weights(self):
initrange = 0.1
self.encoder.weight.data.uniform_(-initrange, initrange)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-initrange, initrange)
def forward(self, src, src_mask):
src = self.encoder(src) * math.sqrt(self.ninp)
src = self.pos_encoder(src)
output = self.transformer_encoder(src, src_mask)
output = self.decoder(output)
return output
```
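As a quick illustration (a sketch added here, not part of the original tutorial), the same masking logic can be applied to a toy sequence of length 5 to see the pattern described above:
```
import torch

# Reproduce the generate_square_subsequent_mask logic for a toy sequence length of 5.
sz = 5
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
print(mask)
# Row i has 0.0 in columns 0..i (attention allowed) and -inf in later columns (blocked).
```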
``PositionalEncoding`` module injects some information about the
relative or absolute position of the tokens in the sequence. The
positional encodings have the same dimension as the embeddings so that
the two can be summed. Here, we use ``sine`` and ``cosine`` functions of
different frequencies.
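For reference, the encodings built in the next cell are the sinusoids from the "Attention is All You Need" paper, where $pos$ is the position in the sequence, $i$ indexes the embedding dimension and $d_{\text{model}}$ is the embedding size (``d_model`` in the code):
\begin{align}
PE_{(pos,\,2i)} = \sin\!\big(pos/10000^{2i/d_{\text{model}}}\big), \qquad
PE_{(pos,\,2i+1)} = \cos\!\big(pos/10000^{2i/d_{\text{model}}}\big)
\end{align}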
```
class PositionalEncoding(nn.Module):
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
self.register_buffer('pe', pe)
def forward(self, x):
x = x + self.pe[:x.size(0), :]
return self.dropout(x)
```
Load and batch data
-------------------
This tutorial uses ``torchtext`` to generate Wikitext-2 dataset. The
vocab object is built based on the train dataset and is used to numericalize
tokens into tensors. Starting from sequential data, the ``batchify()``
function arranges the dataset into columns, trimming off any tokens remaining
after the data has been divided into batches of size ``batch_size``.
For instance, with the alphabet as the sequence (total length of 26)
and a batch size of 4, we would divide the alphabet into 4 sequences of
length 6:
\begin{align}\begin{bmatrix}
\text{A} & \text{B} & \text{C} & \ldots & \text{X} & \text{Y} & \text{Z}
\end{bmatrix}
\Rightarrow
\begin{bmatrix}
\begin{bmatrix}\text{A} \\ \text{B} \\ \text{C} \\ \text{D} \\ \text{E} \\ \text{F}\end{bmatrix} &
\begin{bmatrix}\text{G} \\ \text{H} \\ \text{I} \\ \text{J} \\ \text{K} \\ \text{L}\end{bmatrix} &
\begin{bmatrix}\text{M} \\ \text{N} \\ \text{O} \\ \text{P} \\ \text{Q} \\ \text{R}\end{bmatrix} &
\begin{bmatrix}\text{S} \\ \text{T} \\ \text{U} \\ \text{V} \\ \text{W} \\ \text{X}\end{bmatrix}
\end{bmatrix}\end{align}
These columns are treated as independent by the model, which means that
the dependence of ``G`` and ``F`` can not be learned, but allows more
efficient batch processing.
```
import io
import torch
from torchtext.utils import download_from_url, extract_archive
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
url = 'https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip'
test_filepath, valid_filepath, train_filepath = extract_archive(download_from_url(url))
tokenizer = get_tokenizer('basic_english')
vocab = build_vocab_from_iterator(map(tokenizer,
iter(io.open(train_filepath,
encoding="utf8"))))
def data_process(raw_text_iter):
data = [torch.tensor([vocab[token] for token in tokenizer(item)],
dtype=torch.long) for item in raw_text_iter]
return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))
train_data = data_process(iter(io.open(train_filepath, encoding="utf8")))
val_data = data_process(iter(io.open(valid_filepath, encoding="utf8")))
test_data = data_process(iter(io.open(test_filepath, encoding="utf8")))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def batchify(data, bsz):
# Divide the dataset into bsz parts.
nbatch = data.size(0) // bsz
# Trim off any extra elements that wouldn't cleanly fit (remainders).
data = data.narrow(0, 0, nbatch * bsz)
# Evenly divide the data across the bsz batches.
data = data.view(bsz, -1).t().contiguous()
return data.to(device)
batch_size = 20
eval_batch_size = 10
train_data = batchify(train_data, batch_size)
val_data = batchify(val_data, eval_batch_size)
test_data = batchify(test_data, eval_batch_size)
```
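As a quick sanity check of this reshaping (a sketch, not part of the original tutorial), applying ``batchify`` to a length-26 sequence with a batch size of 4 reproduces the 6 x 4 layout of the alphabet illustration above:
```
toy = torch.arange(26)            # stand-in for the A..Z sequence
toy_batches = batchify(toy, 4)
print(toy_batches.shape)          # torch.Size([6, 4])
```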
Functions to generate input and target sequence
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``get_batch()`` function generates the input and target sequence for
the transformer model. It subdivides the source data into chunks of
length ``bptt``. For the language modeling task, the model needs the
following words as ``Target``. For example, with a ``bptt`` value of 2,
we’d get the following two Variables for ``i`` = 0:

It should be noted that the chunks are along dimension 0, consistent
with the ``S`` dimension in the Transformer model. The batch dimension
``N`` is along dimension 1.
```
bptt = 35
def get_batch(source, i):
seq_len = min(bptt, len(source) - 1 - i)
data = source[i:i+seq_len]
target = source[i+1:i+1+seq_len].reshape(-1)
return data, target
```
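A tiny worked example (a sketch, reusing the same toy tensor as in the note above) makes the data/target relationship concrete:
```
toy_batches = batchify(torch.arange(26), 4)   # shape (6, 4)
data, target = get_batch(toy_batches, 0)
print(data.shape)     # (5, 4): bptt is capped here at len(source) - 1 - i = 5
print(target.shape)   # (20,): the next token for every entry of data, flattened
```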
Initiate an instance
--------------------
The model is set up with the hyperparameters below. The vocabulary size is
equal to the length of the vocab object.
```
ntokens = len(vocab.stoi) # the size of vocabulary
emsize = 200 # embedding dimension
nhid = 200 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 2 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 2 # the number of heads in the multiheadattention models
dropout = 0.2 # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
```
Run the model
-------------
`CrossEntropyLoss <https://pytorch.org/docs/master/nn.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss>`__
is applied to track the loss and
`SGD <https://pytorch.org/docs/master/optim.html?highlight=sgd#torch.optim.SGD>`__
implements the stochastic gradient descent method as the optimizer. The initial
learning rate is set to 5.0. `StepLR <https://pytorch.org/docs/master/optim.html?highlight=steplr#torch.optim.lr_scheduler.StepLR>`__ is
applied to adjust the learning rate through the epochs. During
training, we use the
`nn.utils.clip_grad_norm\_ <https://pytorch.org/docs/master/nn.html?highlight=nn%20utils%20clip_grad_norm#torch.nn.utils.clip_grad_norm_>`__
function to scale all the gradients together to prevent them from exploding.
```
criterion = nn.CrossEntropyLoss()
lr = 5.0 # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
import time
def train():
model.train() # Turn on the train mode
total_loss = 0.
start_time = time.time()
src_mask = model.generate_square_subsequent_mask(bptt).to(device)
for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
data, targets = get_batch(train_data, i)
optimizer.zero_grad()
if data.size(0) != bptt:
src_mask = model.generate_square_subsequent_mask(data.size(0)).to(device)
output = model(data, src_mask)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
total_loss += loss.item()
log_interval = 200
if batch % log_interval == 0 and batch > 0:
cur_loss = total_loss / log_interval
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches | '
'lr {:02.2f} | ms/batch {:5.2f} | '
'loss {:5.2f} | ppl {:8.2f}'.format(
epoch, batch, len(train_data) // bptt, scheduler.get_lr()[0],
elapsed * 1000 / log_interval,
cur_loss, math.exp(cur_loss)))
total_loss = 0
start_time = time.time()
def evaluate(eval_model, data_source):
eval_model.eval() # Turn on the evaluation mode
total_loss = 0.
src_mask = model.generate_square_subsequent_mask(bptt).to(device)
with torch.no_grad():
for i in range(0, data_source.size(0) - 1, bptt):
data, targets = get_batch(data_source, i)
if data.size(0) != bptt:
src_mask = model.generate_square_subsequent_mask(data.size(0)).to(device)
output = eval_model(data, src_mask)
output_flat = output.view(-1, ntokens)
total_loss += len(data) * criterion(output_flat, targets).item()
return total_loss / (len(data_source) - 1)
```
Loop over epochs. Save the model if the validation loss is the best
we've seen so far. Adjust the learning rate after each epoch.
```
best_val_loss = float("inf")
epochs = 3 # The number of epochs
best_model = None
for epoch in range(1, epochs + 1):
epoch_start_time = time.time()
train()
val_loss = evaluate(model, val_data)
print('-' * 89)
print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),
val_loss, math.exp(val_loss)))
print('-' * 89)
if val_loss < best_val_loss:
best_val_loss = val_loss
best_model = model
scheduler.step()
```
Evaluate the model with the test dataset
----------------------------------------
Apply the best model to check the result with the test dataset.
```
test_loss = evaluate(best_model, test_data)
print('=' * 89)
print('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(
test_loss, math.exp(test_loss)))
print('=' * 89)
```
|
github_jupyter
|
---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._
---
# The Series Data Structure
```
import pandas as pd
pd.Series?
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])
df.head()
df[['Item Purchased']]
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])
df.head()
import pandas as pd
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])
df.head()
df = df.rename_axis('S')  # set_index() has no 'name' argument; rename_axis names the index
df[df['Cost']>3.0]
animals = ['Tiger', 'Bear', 'Moose']
pd.Series(animals)
numbers = [1, 2, 3]
pd.Series(numbers)
animals = ['Tiger', 'Bear', None]
pd.Series(animals)
numbers = [1, 2, None]
pd.Series(numbers)
import numpy as np
np.nan == None
np.nan == np.nan
np.isnan(np.nan)
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.index
s = pd.Series(['Tiger', 'Bear', 'Moose'], index=['India', 'America', 'Canada'])
s
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports, index=['Golf', 'Sumo', 'Hockey'])
s
```
# Querying a Series
```
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.iloc[3]
s.loc['Golf']
s[3]
s['Golf']
sports = {99: 'Bhutan',
100: 'Scotland',
101: 'Japan',
102: 'South Korea'}
s = pd.Series(sports)
s[0] #This won't call s.iloc[0] as one might expect, it generates an error instead
s = pd.Series([100.00, 120.00, 101.00, 3.00])
s
total = 0
for item in s:
total+=item
print(total)
import numpy as np
total = np.sum(s)
print(total)
#this creates a big series of random numbers
s = pd.Series(np.random.randint(0,1000,10000))
s.head()
len(s)
%%timeit -n 100
summary = 0
for item in s:
summary+=item
%%timeit -n 100
summary = np.sum(s)
s+=2 #adds two to each item in s using broadcasting
s.head()
for label, value in s.iteritems():
s.set_value(label, value+2)
s.head()
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
for label, value in s.iteritems():
s.loc[label]= value+2
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
s+=2
s = pd.Series([1, 2, 3])
s.loc['Animal'] = 'Bears'
s
original_sports = pd.Series({'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'})
cricket_loving_countries = pd.Series(['Australia',
'Barbados',
'Pakistan',
'England'],
index=['Cricket',
'Cricket',
'Cricket',
'Cricket'])
all_countries = original_sports.append(cricket_loving_countries)
original_sports
cricket_loving_countries
all_countries
all_countries.loc['Cricket']
```
# The DataFrame Data Structure
```
import pandas as pd
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])
df.head()
df.loc['Store 2']
type(df.loc['Store 2'])
df.loc['Store 1']
df.loc['Store 1', 'Cost']
df.T
df.T.loc['Cost']
df['Cost']
df.loc['Store 1']['Cost']
df.loc[:,['Name', 'Cost']]
df.drop('Store 1')
df
copy_df = df.copy()
copy_df = copy_df.drop('Store 1')
copy_df
copy_df.drop?
del copy_df['Name']
copy_df
df['Location'] = None
df
```
# Dataframe Indexing and Loading
```
costs = df['Cost']
costs
costs+=2
costs
df
!cat olympics.csv
df = pd.read_csv('olympics.csv')
df.head()
df = pd.read_csv('olympics.csv', index_col = 0, skiprows=1)
df.head()
df.columns
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold' + col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver' + col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze' + col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#' + col[1:]}, inplace=True)
df.head()
```
# Querying a DataFrame
```
df['Gold'] > 0
only_gold = df.where(df['Gold'] > 0)
only_gold.head()
only_gold['Gold'].count()
df['Gold'].count()
only_gold = only_gold.dropna()
only_gold.head()
only_gold = df[df['Gold'] > 0]
only_gold.head()
len(df[(df['Gold'] > 0) | (df['Gold.1'] > 0)])
df[(df['Gold.1'] > 0) & (df['Gold'] == 0)]
```
# Indexing Dataframes
```
df.head()
df['country'] = df.index
df = df.set_index('Gold')
df.head()
df = df.reset_index()
df.head()
df = pd.read_csv('census.csv')
df.head()
df['SUMLEV'].unique()
df=df[df['SUMLEV'] == 50]
df.head()
columns_to_keep = ['STNAME',
'CTYNAME',
'BIRTHS2010',
'BIRTHS2011',
'BIRTHS2012',
'BIRTHS2013',
'BIRTHS2014',
'BIRTHS2015',
'POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']
df = df[columns_to_keep]
df.head()
df = df.set_index(['STNAME', 'CTYNAME'])
df.head()
df.loc['Michigan', 'Washtenaw County']
df.loc[ [('Michigan', 'Washtenaw County'),
('Michigan', 'Wayne County')] ]
```
# Missing values
```
df = pd.read_csv('log.csv')
df
df.fillna?
df = df.set_index('time')
df = df.sort_index()
df
df = df.reset_index()
df = df.set_index(['time', 'user'])
df
df = df.fillna(method='ffill')
df.head()
```
|
github_jupyter
|
```
%matplotlib inline
```
This notebook is based on:
https://mne.tools/stable/auto_tutorials/stats-sensor-space/75_cluster_ftest_spatiotemporal.html
# Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
```
# numpy, matplotlib and mne itself are also needed by the cells below
import numpy as np
import matplotlib.pyplot as plt
import mne
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mne.stats import spatio_temporal_cluster_test
from mne.channels import find_ch_adjacency
from mne.viz import plot_compare_evokeds
```
## Read epochs
Your pipeline from the previous notebook(s) should go here: basically, everything from reading the raw data to the epoching needs to be done at this point.
Once it is all epoched you can continue. _Remember to equalise your conditions!_
The MNE-python stats functions work on a numpy array with the shape:
- n_observations $\times$ n_times $\times$ n_channels/n_vertices
So we need to extract the data and then transform it to the right shape. _Remember_ MNE-python epochs are in the shape:
- n_observations $\times$ n_channels/n_vertices $\times$ n_times
The n_channels/n_vertices dimension is used because the functions work on both sensor-space and source-space data.
You should also select just two conditions, e.g. left vs right auditory or auditory vs visual.
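A minimal sketch of such a pipeline is given below, purely as a placeholder: the file name, event IDs, filter band and epoch window are assumptions and should be replaced by the choices from your own previous notebooks.
```
# Hypothetical pipeline sketch - adapt the path, events and parameters to your data.
import mne

raw = mne.io.read_raw_fif('sample_raw.fif', preload=True)   # assumed file name
raw.filter(1., 40.)                                         # example band-pass
events = mne.find_events(raw)
event_dict = {'Aud/L': 1, 'Aud/R': 2}                       # pick just two conditions
epochs = mne.Epochs(raw, events, event_id=event_dict,
                    tmin=-0.2, tmax=0.5, baseline=(None, 0),
                    preload=True)
epochs.equalize_event_counts(list(event_dict))              # equalise your conditions!
event_id = event_dict                                       # the plotting code below uses this name
```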
```
X = [epochs[k].get_data() for k in event_dict] # as 3D matrix
X = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering
```
## Find the FieldTrip neighbor definition to setup sensor adjacency
```
adjacency, ch_names = find_ch_adjacency(epochs.info, ch_type='eeg')
print(type(adjacency)) # it's a sparse matrix!
plt.imshow(adjacency.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
plt.xlabel('{} EEG'.format(len(ch_names)))
plt.ylabel('{} EEG'.format(len(ch_names)))
plt.title('Between-sensor adjacency')
```
## Compute permutation statistic
How does it work? We use clustering to "bind" together features which are
similar. Our features are the signals measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size. For more background read:
Maris/Oostenveld (2007), "Nonparametric statistical testing of EEG- and
MEG-data" Journal of Neuroscience Methods, Vol. 164, No. 1., pp. 177-190.
doi:10.1016/j.jneumeth.2007.03.024
## TASK
Look up what the different parameters in the function does!
```
# set family-wise p-value
p_accept = 0.05
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=2000,
threshold=None, tail=0,
n_jobs=1, buffer_size=None,
adjacency=adjacency)
T_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
```
Note. The same functions work with source estimate. The only differences
are the origin of the data, the size, and the adjacency definition.
It can be used for single trials or for groups of subjects.
## Visualize clusters
**Adjust the visualization to the conditions you have selected!**
```
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = T_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='auto')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
```
## Exercises
- What is the smallest p-value you can obtain, given the finite number of
permutations?
|
github_jupyter
|
```
import os
import xgboost as xgb
import pandas as pd
import numpy as np
from utils import encode_numeric_zscore_list, encode_numeric_zscore_all, to_xy, encode_text_index_list, encode_numeric_log_all
from xgboost.sklearn import XGBClassifier, XGBRegressor
from sklearn import datasets
from sigopt_sklearn.search import SigOptSearchCV
path = "./data/allstate"
inputFilePath = os.path.join(path, "train.csv.zip")
df = pd.read_csv(inputFilePath, compression="zip", header=0, na_values=['NULL'])
df = df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df.drop('id', axis=1, inplace=True)
#df = df.sample(frac=0.01)
#encode categorical columns as integer indexes
encode_text_index_list(df, ['cat1', 'cat2', 'cat3', 'cat4', 'cat5', 'cat6', 'cat7', 'cat8', 'cat9', 'cat10', 'cat11', 'cat12', 'cat13', 'cat14', 'cat15', 'cat16', 'cat17', 'cat18', 'cat19', 'cat20', 'cat21', 'cat22', 'cat23', 'cat24', 'cat25', 'cat26', 'cat27', 'cat28', 'cat29', 'cat30', 'cat31', 'cat32', 'cat33', 'cat34', 'cat35', 'cat36', 'cat37', 'cat38', 'cat39', 'cat40', 'cat41', 'cat42', 'cat43', 'cat44', 'cat45', 'cat46', 'cat47', 'cat48', 'cat49', 'cat50', 'cat51', 'cat52', 'cat53', 'cat54', 'cat55', 'cat56', 'cat57', 'cat58', 'cat59', 'cat60', 'cat61', 'cat62', 'cat63', 'cat64', 'cat65', 'cat66', 'cat67', 'cat68', 'cat69', 'cat70', 'cat71', 'cat72', 'cat73', 'cat74', 'cat75', 'cat76', 'cat77', 'cat78', 'cat79', 'cat80', 'cat81', 'cat82', 'cat83', 'cat84', 'cat85', 'cat86', 'cat87', 'cat88', 'cat89', 'cat90', 'cat91', 'cat92', 'cat93', 'cat94', 'cat95', 'cat96', 'cat97', 'cat98', 'cat99', 'cat100', 'cat101', 'cat102', 'cat103', 'cat104', 'cat105', 'cat106', 'cat107', 'cat108', 'cat109', 'cat110', 'cat111', 'cat112', 'cat113', 'cat114', 'cat115', 'cat116'])
#encode all numeric values to zscored values
encode_numeric_zscore_list(df, ['cont1', 'cont2', 'cont3', 'cont4', 'cont5', 'cont6', 'cont7', 'cont8', 'cont9', 'cont10', 'cont11', 'cont12', 'cont13', 'cont14'])
#replace remaining missing values with 0 (fillna is not in-place, so assign the result back)
df = df.fillna(0)
# Create x(predictors) and y (expected outcome)
X,Y = to_xy(df, "loss")
# find your SigOpt client token here : https://sigopt.com/user/profile
client_token = "UAJKINHBEGLJVIYYMGWANLUPRORPFRLTJMESGZKNPTHKOSIW"
xgb_params = {
'learning_rate' : [0.01, 0.5],
'n_estimators' : [10, 70],
'max_depth':[3, 50],
'min_child_weight':[1, 15],
'gamma':[0, 1.0],
'subsample':[0.1, 1.0],
'colsample_bytree':[0.1, 1.0],
'max_delta_step': [1,15],
'colsample_bylevel': [0.1, 1.0],
#'lamda': [1,5],
#'alpha': [1,5],
'scale_pos_weight': [0,5],
#'objective': 'reg:linear',
#'booster': ['gblinear', 'gbtree'] ,
#'eval_metric': 'mae',
#'tree_method': ['exact', 'approx']
}
xgb_model = XGBRegressor()  # renamed so it does not shadow the xgboost module imported as xgb
clf = SigOptSearchCV(xgb_model, xgb_params, cv=5,
                     client_token=client_token, n_jobs=25, n_iter=700, verbose=1)
clf.fit(X, Y)
a = XGBRegressor()
a.get_params().keys()
```
|
github_jupyter
|
# Nearest neighbors - evaluation
How to evaluate the relevance of a nearest neighbor model.
```
%matplotlib inline
from papierstat.datasets import load_wines_dataset
df = load_wines_dataset()
X = df.drop(['quality', 'color'], axis=1)
y = df['quality']
from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor(n_neighbors=1)
knn.fit(X, y)
prediction = knn.predict(X)
```
The model makes no errors on any of the examples in the wine dataset. This is expected, since the nearest neighbor of a wine is necessarily itself, so the predicted score is its own.
```
min(prediction - y), max(prediction - y)
```
Under these conditions it is hard to say whether the prediction is of good quality. We could estimate the quality of the prediction on a new wine, but there are none for the moment and the computer is not going to make them up. Perhaps we can instead look at how often the nearest neighbor of a wine, other than the wine itself, shares the same score.
```
from sklearn.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=2)
nn.fit(X)
distance, index = nn.kneighbors(X)
proche = index[:, 1].ravel()
note_proche = [y[i] for i in proche]
```
All that remains is to compute the difference between a wine's score and that of its nearest neighbor other than itself.
```
diff = y - note_proche
ax = diff.hist(bins=20, figsize=(3,3))
ax.set_title('Histogramme des différences\nde prédiction')
```
This works for two thirds of the dataset; for the remaining third, the scores differ. We can now check whether the distance between these two neighbors is correlated with that difference.
```
import pandas
dif = pandas.DataFrame(dict(dist=distance[:,1], diff=diff))
ax = dif.plot(x="dist", y="diff", kind='scatter', figsize=(3,3))
ax.set_title('Graphe XY - distance / différence');
```
This is not very readable. Let's try another type of plot.
```
from seaborn import violinplot, boxplot
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(8,3))
violinplot(x="diff", y="dist", data=dif, ax=ax[0])
ax[0].set_ylim([0,25])
ax[0].set_title('Violons distribution\ndifférence / distance')
boxplot(x="diff", y="dist", data=dif, ax=ax[1])
ax[1].set_title('Boxplots distribution\ndifférence / distance')
ax[1].set_ylim([0,25]);
```
A priori the model is not so bad: neighbors sharing the same score appear to be closer than those with different scores.
```
import numpy
dif['abs_diff'] = numpy.abs(dif['diff'])
from seaborn import jointplot
ax = jointplot("dist", "abs_diff", data=dif[dif.dist <= 10],
kind="kde", space=0, color="g", size=4)
ax.ax_marg_y.set_title('Heatmap distribution distance / différence');
```
For the most part, nearby wines resemble each other, which is reassuring for what follows. 61% of the wines have a nearest neighbor sharing the same score.
```
len(dif[dif['abs_diff'] == 0]) / dif.shape[0]
```
|
github_jupyter
|
<center> <font size=5> <h1>Define working environment</h1> </font> </center>
The following cells are used to:
- Import needed libraries
- Set the environment variables for Python, Anaconda, GRASS GIS and R statistical computing
- Define the ["GRASSDATA" folder](https://grass.osgeo.org/grass73/manuals/helptext.html), the name of the "location" and "mapset" where you will work.
**Import libraries**
```
## Import libraries needed for setting parameters of operating system
import os
import sys
```
<center> <font size=3> <h3>Environment variables when working on Linux Mint</h3> </font> </center>
**Set 'Python' and 'GRASS GIS' environment variables**
Here, we set [the environment variables that allow the use of GRASS GIS](https://grass.osgeo.org/grass64/manuals/variables.html) inside this Jupyter notebook. Please change the directory paths according to your own system configuration.
```
### Define GRASS GIS environment variables for LINUX UBUNTU Mint 18.1 (Serena)
# Check is environmental variables exists and create them (empty) if not exists.
if not 'PYTHONPATH' in os.environ:
os.environ['PYTHONPATH']=''
if not 'LD_LIBRARY_PATH' in os.environ:
os.environ['LD_LIBRARY_PATH']=''
# Set environmental variables
os.environ['GISBASE'] = '/home/tais/SRC/GRASS/grass_trunk/dist.x86_64-pc-linux-gnu'
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'bin')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'script')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'lib')
#os.environ['PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python')
os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python')
os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python','grass')
os.environ['PYTHONPATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'etc','python','grass','script')
os.environ['PYTHONLIB'] = '/usr/lib/python2.7'
os.environ['LD_LIBRARY_PATH'] += os.pathsep + os.path.join(os.environ['GISBASE'],'lib')
os.environ['GIS_LOCK'] = '$$'
os.environ['GISRC'] = os.path.join(os.environ['HOME'],'.grass7','rc')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons','bin')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons')
os.environ['PATH'] += os.pathsep + os.path.join(os.environ['HOME'],'.grass7','addons','scripts')
## Define GRASS-Python environment
sys.path.append(os.path.join(os.environ['GISBASE'],'etc','python'))
```
**Import GRASS Python packages**
```
## Import libraries needed to launch GRASS GIS in the jupyter notebook
import grass.script.setup as gsetup
## Import libraries needed to call GRASS using Python
import grass.script as gscript
from grass.script import core as grass
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
**Display current environment variables of your computer**
```
## Display the current defined environment variables
for key in os.environ.keys():
print "%s = %s \t" % (key,os.environ[key])
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
<center> <font size=5> <h1>Define functions</h1> </font> </center>
This section of the notebook is dedicated to defining functions which will then be called later in the script. If you want to create your own functions, define them here.
### Function for computing processing time
The "print_processing_time" is used to calculate and display the processing time for various stages of the processing chain. At the beginning of each major step, the current time is stored in a new variable, using [time.time() function](https://docs.python.org/2/library/time.html). At the end of the stage in question, the "print_processing_time" function is called and takes as argument the name of this new variable containing the recorded time at the beginning of the stage, and an output message.
```
## Import library for managing time in python
import time
## Function "print_processing_time()" compute processing time and printing it.
# The argument "begintime" wait for a variable containing the begintime (result of time.time()) of the process for which to compute processing time.
# The argument "printmessage" wait for a string format with information about the process.
def print_processing_time(begintime, printmessage):
endtime=time.time()
processtime=endtime-begintime
remainingtime=processtime
days=int((remainingtime)/86400)
remainingtime-=(days*86400)
hours=int((remainingtime)/3600)
remainingtime-=(hours*3600)
minutes=int((remainingtime)/60)
remainingtime-=(minutes*60)
seconds=round((remainingtime)%60,1)
if processtime<60:
finalprintmessage=str(printmessage)+str(seconds)+" seconds"
elif processtime<3600:
finalprintmessage=str(printmessage)+str(minutes)+" minutes and "+str(seconds)+" seconds"
elif processtime<86400:
finalprintmessage=str(printmessage)+str(hours)+" hours and "+str(minutes)+" minutes and "+str(seconds)+" seconds"
elif processtime>=86400:
finalprintmessage=str(printmessage)+str(days)+" days, "+str(hours)+" hours and "+str(minutes)+" minutes and "+str(seconds)+" seconds"
return finalprintmessage
```
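A minimal usage sketch (the message string is arbitrary and purely illustrative):
```
## Record the start time, run a processing step, then print its duration
begintime_step = time.time()
# ... processing step goes here ...
print(print_processing_time(begintime_step, "Processing step achieved in: "))
```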
### Function for creation of configuration file for r.li (landscape units provided as polygons) (multiprocessed)
```
##### Function that creates the r.li configuration file for a list of landcover rasters.
### It creates, in a single call, as many configuration files as there are rasters provided in 'listoflandcoverraster'.
### It should be used only in case studies with several landcover rasters and a single landscape unit layer:
### the landscape unit layer is fixed and only the landcover rasters change.
# 'listoflandcoverraster' wait for a list with the name (string) of landcover rasters.
# 'landscape_polygons' wait for the name (string) of the vector layer containing the polygons to be used as landscape units.
# 'uniqueid' wait for the name of the 'landscape_polygons' layer's columns containing unique ID for each landscape unit polygon.
# 'returnlistpath' wait for a boolean value (True/False) according to the fact that a list containing the path to the configuration files is desired.
# 'ncores' wait for a integer corresponding to the number of desired cores to be used for parallelization.
# Import libraries for multiprocessing
import multiprocessing
from multiprocessing import Pool
from functools import partial
# Function that copy the landscape unit raster masks on a new layer with name corresponding to the current 'landcover_raster'
def copy_landscapeunitmasks(current_landcover_raster,base_landcover_raster,landscape_polygons,landscapeunit_bbox,cat):
### Copy the landscape units mask for the current 'cat'
# Define the name of the current "current_landscapeunit_rast" layer
current_landscapeunit_rast=current_landcover_raster.split("@")[0]+"_"+landscape_polygons.split("@")[0]+"_"+str(cat)
base_landscapeunit_rast=base_landcover_raster.split("@")[0]+"_"+landscape_polygons.split("@")[0]+"_"+str(cat)
    # Copy the landscape unit mask created for the first landcover map so its name matches the current landcover map
gscript.run_command('g.copy', overwrite=True, quiet=True, raster=(base_landscapeunit_rast,current_landscapeunit_rast))
# Add the line to the maskedoverlayarea variable
maskedoverlayarea="MASKEDOVERLAYAREA "+current_landscapeunit_rast+"|"+landscapeunit_bbox[cat]
return maskedoverlayarea
# Function that create the r.li configuration file for the base landcover raster and then for all the binary rasters
def create_rli_configfile(listoflandcoverraster,landscape_polygons,uniqueid='cat',returnlistpath=True,ncores=2):
# Check if 'listoflandcoverraster' is not empty
if len(listoflandcoverraster)==0:
sys.exit("The list of landcover raster is empty and should contain at least one raster name")
# Check if rasters provided in 'listoflandcoverraster' exists to avoid error in mutliprocessing
for cur_rast in listoflandcoverraster:
try:
mpset=cur_rast.split("@")[1]
except:
mpset=""
if cur_rast.split("@")[0] not in [x[0] for x in gscript.list_pairs(type='raster',mapset=mpset)]:
sys.exit('Raster <%s> not found' %cur_rast)
    # Check that the rasters provided in 'listoflandcoverraster' have the same extent and spatial resolution
raster={}
    for x, rast in enumerate(listoflandcoverraster):  # 'raster_list' was undefined; use the function argument
raster[x]=gscript.raster_info(rast)
key_list=raster.keys()
for x in key_list[1:]:
for info in ('north','south','east','west','ewres','nsres'):
if not raster[0][info]==raster[x][info]:
sys.exit("Some raster provided in the list have different spatial resolution or extend, please check")
# Get the version of GRASS GIS
version=grass.version()['version'].split('.')[0]
# Define the folder to save the r.li configuration files
if sys.platform=="win32":
rli_dir=os.path.join(os.environ['APPDATA'],"GRASS"+version,"r.li")
else:
rli_dir=os.path.join(os.environ['HOME'],".grass"+version,"r.li")
if not os.path.exists(rli_dir):
os.makedirs(rli_dir)
## Create an ordered list with the 'cat' value of landscape units to be processed.
try:
landscape_polygons_mapset=landscape_polygons.split("@")[1]
except:
landscape_polygons_mapset=list(gscript.parse_command('g.mapset', flags='p'))[0]
dbpath="$GISDBASE/$LOCATION_NAME/%s/sqlite/sqlite.db"%landscape_polygons_mapset
if uniqueid not in list(gscript.parse_command('db.columns', table=landscape_polygons.split("@")[0], database=dbpath)):
sys.exit('Column <%s> not found in vector layer <%s>' %(uniqueid,landscape_polygons.split("@")[0]))
else:
list_cat=[int(x) for x in gscript.parse_command('v.db.select', quiet=True,
map=landscape_polygons, column=uniqueid, flags='c')]
list_cat.sort()
    # Declare an empty dictionary which will contain the north, south, east, west values for each landscape unit
    landscapeunit_bbox={}
    # Declare an empty list which will contain the paths of the configuration files created
    listpath=[]
    # Declare an empty string variable which will contain the core part of the r.li configuration file
maskedoverlayarea=""
# Duplicate 'listoflandcoverraster' in a new variable called 'tmp_list'
tmp_list=list(listoflandcoverraster)
# Set the current landcover raster as the first of the list
base_landcover_raster=tmp_list.pop(0) #The pop function return the first item of the list and delete it from the list at the same time
# Loop trough the landscape units
for cat in list_cat:
# Extract the current landscape unit polygon as temporary vector
tmp_vect="tmp_"+base_landcover_raster.split("@")[0]+"_"+landscape_polygons.split("@")[0]+"_"+str(cat)
gscript.run_command('v.extract', overwrite=True, quiet=True, input=landscape_polygons, cats=cat, output=tmp_vect)
# Set region to match the extent of the current landscape polygon, with resolution and alignement matching the landcover raster
gscript.run_command('g.region', vector=tmp_vect, align=base_landcover_raster)
# Rasterize the landscape unit polygon
landscapeunit_rast=tmp_vect[4:]
gscript.run_command('v.to.rast', overwrite=True, quiet=True, input=tmp_vect, output=landscapeunit_rast, use='cat', memory='3000')
# Remove temporary vector
gscript.run_command('g.remove', quiet=True, flags="f", type='vector', name=tmp_vect)
# Set the region to match the raster landscape unit extent and save the region info in a dictionary
region_info=gscript.parse_command('g.region', raster=landscapeunit_rast, flags='g')
n=str(round(float(region_info['n']),5)) #the config file need 5 decimal for north and south
s=str(round(float(region_info['s']),5))
e=str(round(float(region_info['e']),6)) #the config file need 6 decimal for east and west
w=str(round(float(region_info['w']),6))
# Save the coordinates of the bbox in the dictionary (n,s,e,w)
landscapeunit_bbox[cat]=n+"|"+s+"|"+e+"|"+w
# Add the line to the maskedoverlayarea variable
maskedoverlayarea+="MASKEDOVERLAYAREA "+landscapeunit_rast+"|"+landscapeunit_bbox[cat]+"\n"
# Compile the content of the r.li configuration file
config_file_content="SAMPLINGFRAME 0|0|1|1\n"
config_file_content+=maskedoverlayarea
config_file_content+="RASTERMAP "+base_landcover_raster+"\n"
config_file_content+="VECTORMAP "+landscape_polygons+"\n"
# Create a new file and save the content
configfilename=base_landcover_raster.split("@")[0]+"_"+landscape_polygons.split("@")[0]
path=os.path.join(rli_dir,configfilename)
listpath.append(path)
f=open(path, 'w')
f.write(config_file_content)
f.close()
# Continue creation of r.li configuration file and landscape unit raster the rest of the landcover raster provided
while len(tmp_list)>0:
# Reinitialize 'maskedoverlayarea' variable as an empty string
maskedoverlayarea=""
# Set the current landcover raster as the first of the list
current_landcover_raster=tmp_list.pop(0) #The pop function return the first item of the list and delete it from the list at the same time
# Copy all the landscape units masks for the current landcover raster
p=Pool(ncores) #Create a pool of processes and launch them using 'map' function
func=partial(copy_landscapeunitmasks,current_landcover_raster,base_landcover_raster,landscape_polygons,landscapeunit_bbox) # Set fixed argument of the function
maskedoverlayarea=p.map(func,list_cat) # Launch the processes for as many items in the list and get the ordered results using map function
p.close()
p.join()
# Compile the content of the r.li configuration file
config_file_content="SAMPLINGFRAME 0|0|1|1\n"
config_file_content+="\n".join(maskedoverlayarea)+"\n"
config_file_content+="RASTERMAP "+current_landcover_raster+"\n"
config_file_content+="VECTORMAP "+landscape_polygons+"\n"
# Create a new file and save the content
configfilename=current_landcover_raster.split("@")[0]+"_"+landscape_polygons.split("@")[0]
path=os.path.join(rli_dir,configfilename)
listpath.append(path)
f=open(path, 'w')
f.write(config_file_content)
f.close()
# Return a list of path of configuration files creates if option actived
if returnlistpath:
return listpath
```
### Function for creation of binary raster from a categorical raster (multiprocessed)
```
###### Function creating a binary raster for each category of a base raster.
### The function run within the current region. If a category do not exists in the current region, no binary map will be produce
# 'categorical_raster' wait for the name of the base raster to be used. It is the one from which one binary raster will be produced for each category value
# 'prefix' wait for a string corresponding to the prefix of the name of the binary raster which will be produced
# 'setnull' wait for a boolean value (True, False) according to the fact that the output binary should be 1/0 or 1/null
# 'returnlistraster' wait for a boolean value (True, False) regarding to the fact that a list containing the name of binary raster is desired as return of the function
# 'category_list' wait for a list of interger corresponding to specific category of the base raster to be used
# 'ncores' wait for a integer corresponding to the number of desired cores to be used for parallelization
# Import libraries for multiprocessing
import multiprocessing
from multiprocessing import Pool
from functools import partial
def create_binary_raster(categorical_raster,prefix="binary",setnull=False,returnlistraster=True,category_list=None,ncores=2):
# Check if raster exists to avoid error in mutliprocessing
try:
mpset=categorical_raster.split("@")[1]
except:
mpset=""
if categorical_raster not in gscript.list_strings(type='raster',mapset=mpset):
sys.exit('Raster <%s> not found' %categorical_raster)
    # Check that the requested number of cores does not exceed the number available
nbcpu=multiprocessing.cpu_count()
if ncores>=nbcpu:
ncores=nbcpu-1
returnlist=[] #Declare empty list for return
#gscript.run_command('g.region', raster=categorical_raster, quiet=True) #Set the region
null='null()' if setnull else '0' #Set the value for r.mapcalc
minclass=1 if setnull else 2 #Set the value to check if the binary raster is empty
if category_list == None: #If no category_list provided
category_list=[cl for cl in gscript.parse_command('r.category',map=categorical_raster,quiet=True)]
for i,x in enumerate(category_list): #Make sure the format is UTF8 and not Unicode
category_list[i]=x.encode('UTF8')
category_list.sort(key=float) #Sort the raster categories in ascending.
p=Pool(ncores) #Create a pool of processes and launch them using 'map' function
func=partial(get_binary,categorical_raster,prefix,null,minclass) # Set the two fixed argument of the function
returnlist=p.map(func,category_list) # Launch the processes for as many items in the 'functions_name' list and get the ordered results using map function
p.close()
p.join()
if returnlistraster:
return returnlist
#### Function that extract binary raster for a specified class (called in 'create_binary_raster' function)
def get_binary(categorical_raster,prefix,null,minclass,cl):
binary_class=prefix+"_"+cl
gscript.run_command('r.mapcalc', expression=binary_class+'=if('+categorical_raster+'=='+str(cl)+',1,'+null+')',overwrite=True, quiet=True)
if len(gscript.parse_command('r.category',map=binary_class,quiet=True))>=minclass: #Check if created binary is not empty
return binary_class
else:
gscript.run_command('g.remove', quiet=True, flags="f", type='raster', name=binary_class)
```
### Function for computation of spatial metrics at landscape level (multiprocessed)
```
##### Function that compute different landscape metrics (spatial metrics) at landscape level.
### The metric computed are "dominance","pielou","renyi","richness","shannon","simpson".
### It is important to set the computation region before running this script so that it matches the extent of the 'raster' layer.
# 'configfile' wait for the path (string) to the configuration file corresponding to the 'raster' layer.
# 'raster' wait for the name (string) of the landcover map on which landscape metrics will be computed.
# 'returnlistresult' wait for a boolean value (True/False) according to the fact that a list containing the path to the result files is desired.
# 'ncores' wait for a integer corresponding to the number of desired cores to be used for parallelization.
# Import libraries for multiprocessing
import multiprocessing
from multiprocessing import Pool
from functools import partial
def compute_landscapelevel_metrics(configfile, raster, spatial_metric):
filename=raster.split("@")[0]+"_%s" %spatial_metric
outputfile=os.path.join(os.path.split(configfile)[0],"output",filename)
if spatial_metric=='renyi': # The alpha parameter was set to 2 as in https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
gscript.run_command('r.li.%s' %spatial_metric, overwrite=True,
input=raster,config=configfile,alpha='2', output=filename)
else:
gscript.run_command('r.li.%s' %spatial_metric, overwrite=True,
input=raster,config=configfile, output=filename)
return outputfile
def get_landscapelevel_metrics(configfile, raster, returnlistresult=True, ncores=2):
# Check if raster exists to avoid error in mutliprocessing
try:
mpset=raster.split("@")[1]
except:
mpset=""
if raster not in gscript.list_strings(type='raster',mapset=mpset):
sys.exit('Raster <%s> not found' %raster)
# Check if configfile exists to avoid error in mutliprocessing
if not os.path.exists(configfile):
sys.exit('Configuration file <%s> not found' %configfile)
# List of metrics to be computed
spatial_metric_list=["dominance","pielou","renyi","richness","shannon","simpson"]
    # Check that the requested number of cores does not exceed the number available
nbcpu=multiprocessing.cpu_count()
if ncores>=nbcpu:
ncores=nbcpu-1
if ncores>len(spatial_metric_list):
ncores=len(spatial_metric_list) #Adapt number of cores to number of metrics to compute
#Declare empty list for return
returnlist=[]
# Create a new pool
p=Pool(ncores)
# Set the fixed arguments of the 'compute_landscapelevel_metrics' function
func=partial(compute_landscapelevel_metrics,configfile, raster)
# Launch one process per item in 'spatial_metric_list' and get the ordered results using the map function
returnlist=p.map(func,spatial_metric_list)
p.close()
p.join()
# Return list of paths to result files
if returnlistresult:
return returnlist
```
### Function for computation of spatial metrics at class level (multiprocessed)
```
##### Function that computes different landscape metrics (spatial metrics) at the class level.
### The metrics computed are "patch number (patchnum)","patch density (patchdensity)","mean patch size (mps)",
### "coefficient of variation of patch area (padcv)","range of patch area size (padrange)",
### "standard deviation of patch area (padsd)", "shape index (shape)", "edge density (edgedensity)".
### It is important to set the computational region before running this script so that it matches the extent of the 'raster' layer.
# 'configfile' expects the path (string) to the configuration file corresponding to the 'raster' layer.
# 'raster' expects the name (string) of the landcover map on which landscape metrics will be computed.
# 'returnlistresult' expects a boolean (True/False) indicating whether a list containing the paths to the result files should be returned.
# 'ncores' expects an integer corresponding to the number of cores to be used for parallelization.
# Import libraries for multiprocessing
import multiprocessing
from multiprocessing import Pool
from functools import partial
def compute_classlevel_metrics(configfile, raster, spatial_metric):
filename=raster.split("@")[0]+"_%s" %spatial_metric
gscript.run_command('r.li.%s' %spatial_metric, overwrite=True,
input=raster,config=configfile,output=filename)
outputfile=os.path.join(os.path.split(configfile)[0],"output",filename)
return outputfile
def get_classlevel_metrics(configfile, raster, returnlistresult=True, ncores=2):
# Check if raster exists to avoid error in multiprocessing
try:
mpset=raster.split("@")[1]
except:
mpset=""
if raster not in [x.split("@")[0] for x in gscript.list_strings(type='raster',mapset=mpset)]:
sys.exit('Raster <%s> not found' %raster)
# Check if configfile exists to avoid error in multiprocessing
if not os.path.exists(configfile):
sys.exit('Configuration file <%s> not found' %configfile)
# List of metrics to be computed
spatial_metric_list=["patchnum","patchdensity","mps","padcv","padrange","padsd","shape","edgedensity"]
# Check that the number of cores doesn't exceed the number available
nbcpu=multiprocessing.cpu_count()
if ncores>=nbcpu:
ncores=nbcpu-1
if ncores>len(spatial_metric_list):
ncores=len(spatial_metric_list) #Adapt number of cores to number of metrics to compute
# Declare empty list for return
returnlist=[]
# Create a new pool
p=Pool(ncores)
# Set the fixed arguments of the 'compute_classlevel_metrics' function
func=partial(compute_classlevel_metrics,configfile, raster)
# Launch one process per item in 'spatial_metric_list' and get the ordered results using the map function
returnlist=p.map(func,spatial_metric_list)
p.close()
p.join()
# Return list of paths to result files
if returnlistresult:
return returnlist
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
<center> <font size=5> <h1>User inputs</h1> </font> </center>
```
## Define an empty dictionary for saving user inputs
user={}
## Enter the path to GRASSDATA folder
user["gisdb"] = "/home/tais/Documents/GRASSDATA_Spie2017subset_Ouaga"
## Enter the name of the location (existing or for a new one)
user["location"] = "SPIE_subset"
## Enter the EPSG code for this location
user["locationepsg"] = "32630"
## Enter the name of the mapset to use for segmentation
user["mapsetname"] = "test_rli"
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
# Compute spatial metrics for deriving land use in street blocs
**Launch GRASS GIS working session**
```
## Set the name of the mapset in which to work
mapsetname=user["mapsetname"]
## Launch GRASS GIS working session in the mapset
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
gsetup.init(os.environ['GISBASE'], user["gisdb"], user["location"], mapsetname)
print "You are now working in mapset '"+mapsetname+"'"
else:
print "'"+mapsetname+"' mapset doesn't exists in "+user["gisdb"]
```
**Set the path to the r.li folder for configuration file and for results**
```
os.environ
# Define path of the outputfile (in r.li folder)
version=grass.version()['version'].split('.')[0] # Get the version of GRASS GIS
if sys.platform=="win32":
rli_config_dir=os.path.join(os.environ['APPDATA'],"GRASS"+version,"r.li")
rli_output_dir=os.path.join(os.environ['APPDATA'],"GRASS"+version,"r.li","output")
else:
rli_config_dir=os.path.join(os.environ['HOME'],".grass"+version,"r.li") # Hidden '.grass<version>' directory, consistent with the output directory below
rli_output_dir=os.path.join(os.environ['HOME'],".grass"+version,"r.li","output")
if not os.path.exists(rli_config_dir):
os.makedirs(rli_config_dir)
if not os.path.exists(rli_output_dir):
os.makedirs(rli_output_dir)
# Print
print "GRASS GIS add-on's r.li configuration files will be saved under <%s>."%(rli_config_dir,)
print "GRASS GIS add-on's r.li outputs will be saved under <%s>."%(rli_output_dir,)
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
### Define the name of the base landcover map and landscape units polygons
```
# Set the name of the 'base' landcover map
baselandcoverraster="classif@test_rli"
# Set the name of the vector polygon layer containing the landscape units
landscape_polygons="streetblocks@PERMANENT"
```
### Import shapefile containing street blocks polygons
```
# Set the path to the shapefile containing streetblocks polygons
pathtoshp="/media/tais/data/Dropbox/ULB/MAUPP/Landuse_mapping/Test_spatial_metrics_computation/Data/Subset_spatial_metrics.shp"
# Import shapefile
gscript.run_command('v.in.ogr', quiet=True, overwrite=True, input=pathtoshp, output=landscape_polygons)
```
### Create binary rasters from the base landcover map
```
# Save time for computing processing time
begintime=time.time()
# Create as many binary raster layers as there are categorical values in the base landcover map
gscript.run_command('g.region', raster=baselandcoverraster, quiet=True) #Set the region
pref=baselandcoverraster.split("@")[0]+"_cl" #Set the prefix
raster_list=[] # Initialize an empty list for results
raster_list=create_binary_raster(baselandcoverraster,
prefix=pref,setnull=True,returnlistraster=True,
category_list=None,ncores=15) #Extract binary raster
# Compute and print processing time
print_processing_time(begintime,"Extraction of binary rasters achieved in ")
# Insert the name of the base landcover map at first position in the list
raster_list.insert(0,baselandcoverraster)
# Display the raster to be used for landscape analysis
raster_list
```
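The `print_processing_time` helper used above (and throughout this notebook) is defined in an earlier section and is not reproduced here. As an indication only, here is a minimal sketch with the assumed behaviour (format the time elapsed since `begintime`, print it and return the message); it is not the original implementation:
```
import time

def print_processing_time(begintime, printmessage):
    """Sketch only: build, print and return a message with the elapsed time since 'begintime'."""
    seconds = time.time() - begintime
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    message = printmessage + "%dh %dmin %.1fs" % (hours, minutes, seconds)
    print message
    return message
```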
## Create r.li configuration file for a list of landcover rasters
```
# Save time for computing processing time
begintime=time.time()
# Run creation of r.li configuration file and associated raster layers
list_configfile=create_rli_configfile(raster_list,landscape_polygons,uniqueid='gid',returnlistpath=True,ncores=20)
# Compute and print processing time
print_processing_time(begintime,"Creation of r.li configuration files achieved in ")
# Display the path to the configuration files created
list_configfile
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
## Compute spatial metrics at landscape level
```
# Initialize an empty list which will contain the result files
resultfiles=[]
# Save time for computing processing time
begintime=time.time()
# Get the path to the configuration file for the base landcover raster
currentconfigfile=list_configfile[0]
# Get the name of the base landcover raster
currentraster=raster_list[0]
# Set the region to match the extent of the base raster
gscript.run_command('g.region', raster=currentraster, quiet=True)
# Compute all landscape-level metrics for the base landcover raster (the function parallelizes over the metric list internally)
resultfiles.append(get_landscapelevel_metrics(currentconfigfile, currentraster, returnlistresult=True, ncores=15))
# Compute and print processing time
print_processing_time(begintime,"Computation of spatial metric achieved in ")
resultfiles
```
## Compute spatial metrics at class level
```
# Save time for computing processing time
begintime=time.time()
# Get a list with paths to the configuration file for class level metrics
classlevelconfigfiles=list_configfile[1:]
# Get a list with name of binary landcover raster for class level metrics
classlevelrasters=raster_list[1:]
for x,currentraster in enumerate(classlevelrasters[:]):
# Get the path to the configuration file for the base landcover raster
currentconfigfile=classlevelconfigfiles[x]
# Set the region to match the extent of the base raster
gscript.run_command('g.region', raster=currentraster, quiet=True)
# Compute all class-level metrics for the current binary raster (the function parallelizes over the metric list internally)
resultfiles.append(get_classlevel_metrics(currentconfigfile, currentraster, returnlistresult=True, ncores=10))
# Compute and print processing time
print_processing_time(begintime,"Computation of spatial metric achieved in ")
resultfiles
# Flatten the 'resultfiles' list, which contains several sub-lists
resultfiles=[item for sublist in resultfiles for item in sublist]
resultfiles
```
# Compute some special metrics
```
# Set pixel value of 'buildings' on the 'baselandcoverraster'
buildpixel=11
# Set the name of the new layer containing height of buildings
buildings_height='buildings_height'
# Set the name of the nDSM layer
ndsm="ndsm"
# Set the name of the NDVI layer
ndvi="ndvi"
# Set the name of the NDWI layer
ndwi="ndwi"
# Set the prefix of SAR textures layer
SAR_prefix="SAR_w"
```
### Create raster with nDSM value of 'buildings' pixels
```
# Save time for computing processing time
begintime=time.time()
# Create a raster layer with height of pixels classified as 'buildings'
gscript.run_command('g.region', raster=baselandcoverraster, quiet=True) #Set the region
formula="%s=if(%s==%s, %s, null())"%(buildings_height,baselandcoverraster,buildpixel,ndsm)
gscript.mapcalc(formula, overwrite=True)
# Compute and print processing time
print_processing_time(begintime,"Creation of layer in ")
```
### Standard deviation and median of building height, SAR textures, NDVI, NDWI
```
# Save time for computing processing time
begintime=time.time()
# Create a raster layer with height of pixels classified as 'buildings'
gscript.run_command('g.region', raster=baselandcoverraster, quiet=True) #Set the region
formula="%s=if(%s==%s, %s, null())"%(buildings_height,baselandcoverraster,buildpixel,ndsm)
gscript.mapcalc(formula, overwrite=True)
# Compute and print processing time
print_processing_time(begintime,"Creation of layer in ")
# Set up a list with name of raster layer to be used
ancillarylayers=[]
ancillarylayers.append(buildings_height)
ancillarylayers.append(ndvi)
ancillarylayers.append(ndwi)
[ancillarylayers.append(x) for x in gscript.list_strings("rast", pattern=SAR_prefix, flag='r')] #Append SAR textures
print "Layer to be used :\n\n"+'\n'.join(ancillarylayers)
# Set the path to the file for i.segment.stats results
isegmentstatsfile=os.path.join(rli_output_dir,"ancillary_info")
# Save time for computing processing time
begintime=time.time()
###### Compute shape metrics as well as the standard deviation and median of the ancillary layers for each landscape unit
## Set number of cores to be used
ncores=len(ancillarylayers)
nbcpu=multiprocessing.cpu_count()
if ncores>=nbcpu:
ncores=nbcpu-1
if ncores>len(ancillarylayers):
ncores=len(ancillarylayers) #Adapt number of cores to number of metrics to compute
# Run i.segment.stats
gscript.run_command('g.region', raster=baselandcoverraster, quiet=True) #Set the region
raster_landscapeunits="temp_%s"%landscape_polygons.split("@")[0]
gscript.run_command('v.to.rast', overwrite=True, input=landscape_polygons, output=raster_landscapeunits, use='cat')
gscript.run_command('i.segment.stats', overwrite=True, map=raster_landscapeunits,
raster_statistics='stddev,median',
area_measures='area,perimeter,compact_circle,compact_square,fd',
rasters=','.join(ancillarylayers),
csvfile=isegmentstatsfile,
processes=ncores)
# Compute and print processing time
print_processing_time(begintime,"Metrics computed in ")
resultfiles.insert(0,isegmentstatsfile)
resultfiles
```
# Combine all .csv files together
```
## Function which executes a left join using individual .csv files.
## The individual result files are joined on their first column (the landscape unit id).
# The argument "fileList" expects a list of paths to the individual .csv files to be joined.
# The argument "outfile" expects a string containing the path to the output file to create.
# The argument "overwrite" expects a True/False value allowing or not the overwriting of an existing outfile.
# The argument "pattern" expects a string containing a filename pattern (wildcards such as *.csv are possible); it is currently unused by the function.
import os,sys
import glob
def leftjoin_csv(fileList,outfile,separator=",",overwrite=False,pattern=None):
# Stop execution if the output file exists and cannot be overwritten
if os.path.isfile(outfile) and overwrite==False:
print "File '"+str(outfile)+"' already exists and the overwrite option is not enabled."
else:
if os.path.isfile(outfile) and overwrite==True: # If the output file exists and can be overwritten
os.remove(outfile)
print "File '"+str(outfile)+"' has been overwritten."
if len(fileList)<=1: #Check if there are at least 2 files in the list
sys.exit("This function requires at least two .csv files to be joined together.")
# Save all the values in a dictionary whose keys correspond to the first column
outputdict={}
header=[]
header.append("id") #set the name of the first column
# Loop through all files:
for filenum,f in enumerate([open(f) for f in fileList]):
for linenum,line in enumerate(f):
firstcolumn=line.split(separator)[0]
othercolumns=line.split("\n")[0].split(separator)[1:]
if linenum==0: #If first line
if firstcolumn.split(" ")[0]=="RESULT": #If file comes from r.li.* add-ons
header.append(os.path.split(fileList[filenum])[-1].split(".")[0])
else:
[header.append(x) for x in othercolumns] #If file comes from i.segment.stats
else:
try:
cat_id=firstcolumn.split(" ")[1]
except:
cat_id=firstcolumn
try:
[outputdict[cat_id].append(x) for x in othercolumns]
except:
outputdict[cat_id]=othercolumns
# Write the dictionary with its header to the output csv file
outputcsv=open(outfile,"w")
outputcsv.write(separator.join(header))
outputcsv.write("\n")
for key in outputdict.keys():
outputcsv.write(key+separator)
outputcsv.write(separator.join(outputdict[key]))
outputcsv.write("\n")
outputcsv.close()
# Create a .csvt file with the type of each column
csvt=open(outfile+"t","w")
results=open(outfile,"r")
header=results.next()
results.close()
typecolumn=[]
typecolumn.append("Integer")
for column in header.rstrip("\n").split(separator)[1:]: # One type per column (the original looped over characters of the header string)
typecolumn.append("Real")
csvt.write(separator.join(typecolumn))
csvt.close()
# Print what happened
print str(len(fileList))+" individual .csv files were joined together."
# Join all result files together in a new .csv file
outfile=os.path.join(rli_output_dir,"land_use_metrics.csv")
leftjoin_csv(resultfiles, outfile, separator="|", overwrite=True)
```
# Importing the NDVI layer
```
raise RuntimeError("Intentional guard (a bare 'break' in the original): remove this line to (re)import the NDVI layer")
## Saving current time for processing time management
begintime_ndvi=time.time()
## Import NDVI imagery
print ("Importing NDVI raster imagery at " + time.ctime())
gscript.run_command('r.import',
input="/media/tais/data/MAUPP/WorldView3_Ouagadougou/Orthorectified/mosaique_georef/NDVI/ndvi_georef_ordre2.TIF",
output="ndvi", overwrite=True)
# Mask null/nodata values
gscript.run_command('r.null', map="ndvi")
print_processing_time(begintime_ndvi, "imagery has been imported in ")
```
# Importing the nDSM layer
```
raise RuntimeError("Intentional guard (a bare 'break' in the original): remove this line to (re)import the nDSM layer")
## Saving current time for processing time management
begintime_ndsm=time.time()
## Import nDSM imagery
print ("Importing nDSM raster imagery at " + time.ctime())
grass.run_command('r.import',
input="/media/tais/data/MAUPP/WorldView3_Ouagadougou/Orthorectified/mosaique_georef/nDSM/nDSM_mosaik_georef_ordre2.tif",
output="ndsm", overwrite=True)
## Define null value for specific value in nDSM raster. Adapt the value to your own data.
# If there is no null value in your data, comment the next line
grass.run_command('r.null', map="ndsm", setnull="-999")
# Make histogram equalisation on grey color.
grass.run_command('r.colors', flags='e', map='ndsm', color='grey')
print_processing_time(begintime_ndsm, "nDSM imagery has been imported in ")
```
### Masking the nDSM artifacts
```
raise RuntimeError("Intentional guard (a bare 'break' in the original): remove this line to mask the nDSM artifacts")
# Import vector with nDSM artifacts zones
grass.run_command('v.in.ogr', overwrite=True,
input="/media/tais/data/MAUPP/WorldView3_Ouagadougou/Masque_artifacts_nDSM/Ouaga_mask_artifacts_nDSM.shp",
output="mask_artifacts_ndsm")
## Set the computational region to match the nDSM raster
grass.run_command('g.region', overwrite=True, raster="ndsm")
# Rasterize the vector layer, with value "0" on the artifacts zones
grass.run_command('v.to.rast', input='mask_artifacts_ndsm', output='mask_artifacts_ndsm',
use='val', value='0', memory='5000')
## Set the computational region to match the nDSM raster
grass.run_command('g.region', overwrite=True, raster="ndsm")
## Create a new nDSM with artifacts filled with '0' value
formula='tmp_artifact=nmin(ndsm,mask_artifacts_ndsm)'
grass.mapcalc(formula, overwrite=True)
## Remove the artifact mask
grass.run_command('g.remove', flags='f', type='raster', name="mask_artifacts_ndsm")
## Rename the new nDSM
grass.run_command('g.rename', raster='tmp_artifact,ndsm', overwrite=True)
## Remove the intermediate nDSM layer
grass.run_command('g.remove', flags='f', type='raster', name="tmp_artifact")
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
# Define input raster for computing statistics of segments
```
## Display the name of rasters available in PERMANENT and CLASSIFICATION mapset
print grass.read_command('g.list',type="raster", mapset="PERMANENT", flags='rp')
print grass.read_command('g.list',type="raster", mapset=user["classificationA_mapsetname"], flags='rp')
## Define the list of raster layers for which statistics will be computed
inputstats=[]
inputstats.append("opt_blue")
inputstats.append("opt_green")
inputstats.append("opt_red")
inputstats.append("opt_nir")
inputstats.append("ndsm")
inputstats.append("ndvi")
print "Layer to be used to compute raster statistics of segments:\n"+'\n'.join(inputstats)
## Define the list of raster statistics to be computed for each raster layer
rasterstats=[]
rasterstats.append("min")
rasterstats.append("max")
rasterstats.append("range")
rasterstats.append("mean")
rasterstats.append("stddev")
#rasterstats.append("coeff_var") # Seems that this statistic create null values
rasterstats.append("median")
rasterstats.append("first_quart")
rasterstats.append("third_quart")
rasterstats.append("perc_90")
print "Raster statistics to be computed:\n"+'\n'.join(rasterstats)
## Define the list of area measures (segment's shape statistics) to be computed
areameasures=[]
areameasures.append("area")
areameasures.append("perimeter")
areameasures.append("compact_circle")
areameasures.append("compact_square")
areameasures.append("fd")
print "Area measures to be computed:\n"+'\n'.join(areameasures)
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
<center> <font size=5> <h1>Compute objects' statistics</h1> </font> </center>
```
## Saving current time for processing time management
begintime_computeobjstat=time.time()
```
## Define the folder where the results will be saved and create it if necessary
In the next cell, please adapt the path to the directory where you want to save the .csv outputs of i.segment.stats.
```
## Folder in which to save the statistics and processing-time outputs
outputfolder="/media/tais/My_Book_1/MAUPP/Traitement/Ouagadougou/Segmentation_fullAOI_localapproach/Results/CLASSIF/stats_optical"
## Create the folder if it does not exist
if not os.path.exists(outputfolder):
os.makedirs(outputfolder)
print "Folder '"+outputfolder+"' created"
```
### Copy data from other mapset to the current mapset
Some data needs to be copied from other mapsets into the current mapset.
### Remove current mask
```
## Check if there is a raster layer named "MASK"
if not grass.list_strings("rast", pattern="MASK", mapset=mapsetname, flag='r'):
print 'There is currently no MASK'
else:
## Remove the current MASK layer
grass.run_command('r.mask',flags='r')
print 'The current MASK has been removed'
```
***Copy segmentation raster***
```
## Copy segmentation raster layer from SEGMENTATION mapset to current mapset
grass.run_command('g.copy', overwrite=True,
raster="segmentation_raster@"+user["segmentation_mapsetname"]+",segments")
```
***Copy morphological zone (raster)***
```
## Copy the morphological zone raster layer from the SEGMENTATION mapset to the current mapset
grass.run_command('g.copy', overwrite=True,
raster="zone_morpho@"+user["segmentation_mapsetname"]+",zone_morpho")
```
***Copy morphological zone (vector)***
```
## Copy the morphological zone vector layer from the SEGMENTATION mapset to the current mapset
grass.run_command('g.copy', overwrite=True,
vector="zone_morpho@"+user["segmentation_mapsetname"]+",zone_morpho")
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
# Compute statistics of segments (Full AOI extent)
### Compute statistics of segments using i.segment.stats
The process is designed to compute statistics iteratively for each morphological zone, used here as tiles.
This section uses the ['i.segment.stats' add-on](https://grass.osgeo.org/grass70/manuals/addons/i.segment.stats.html) to compute statistics for each object.
```
## Save name of the layer to be used as tiles
tile_layer='zone_morpho'+'@'+mapsetname
## Save name of the segmentation layer to be used by i.segment.stats
segment_layer='segments'+'@'+mapsetname
## Save name of the column containing area_km value
area_column='area_km2'
## Save name of the column containing morphological type value
type_column='type'
## Save the prefix to be used for the outputfiles of i.segment.stats
prefix="Segstat"
## Save the list of polygons to be processed (save the 'cat' value)
listofregion=list(grass.parse_command('v.db.select', map=tile_layer,
columns='cat', flags='c'))[:]
for count, cat in enumerate(listofregion):
print str(count)+" cat:"+str(cat)
```
```
## Initialize an empty string for saving print outputs
txtcontent=""
## Running i.segment.stats
messagetoprint="Start computing statistics for segments to be classified, using i.segment.stats on "+time.ctime()+"\n"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
begintime_isegmentstats=time.time()
## Compute total area to be processed for process progression information
processed_area=0
nbrtile=len(listofregion)
attributes=grass.parse_command('db.univar', flags='g', table=tile_layer.split("@")[0], column=area_column, driver='sqlite')
total_area=float(attributes['sum'])
messagetoprint=str(nbrtile)+" region(s) will be processed, covering an area of "+str(round(total_area,3))+" Sqkm."+"\n\n"
print (messagetoprint)
txtcontent+=messagetoprint
## Save time before looping
begintime_isegmentstats=time.time()
## Start loop on morphological zones
count=1
for cat in listofregion[:]:
## Save current time at loop' start.
begintime_current_id=time.time()
## Create a computational region for the current polygon
condition="cat="+cat
outputname="tmp_"+cat
grass.run_command('v.extract', overwrite=True, quiet=True,
input=tile_layer, type='area', where=condition, output=outputname)
grass.run_command('g.region', overwrite=True, vector=outputname, align=segment_layer)
grass.run_command('r.mask', overwrite=True, raster=tile_layer, maskcats=cat)
grass.run_command('g.remove', quiet=True, type="vector", name=outputname, flags="f")
## Save size of the current polygon and add it to the already processed area
size=round(float(grass.read_command('v.db.select', map=tile_layer,
columns=area_column, where=condition,flags="c")),2)
## Print
messagetoprint="Computing segments's statistics for tile n°"+str(cat)
messagetoprint+=" ("+str(count)+"/"+str(len(listofregion))+")"
messagetoprint+=" corresponding to "+str(size)+" km2"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
## Define the csv output file name, according to the optimization function selected
outputcsv=os.path.join(outputfolder,prefix+"_"+str(cat)+".csv")
## Compute statistics of objects using i.segment.stats with .csv output only (no vector map output).
grass.run_command('i.segment.stats', overwrite=True, map=segment_layer,
rasters=','.join(inputstats), raster_statistics=','.join(rasterstats),
area_measures=','.join(areameasures), csvfile=outputcsv, processes='20')
## Add the size of the zone to the already processed area
processed_area+=size
## Print
messagetoprint=print_processing_time(begintime_current_id,
"i.segment.stats finishes to process th current tile in ")
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
remainingtile=nbrtile-count
if remainingtile>0:
messagetoprint=str(round((processed_area/total_area)*100,2))+" percent of the total area processed. "
messagetoprint+="Still "+str(remainingtile)+" zone(s) to process."+"\n"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
else:
messagetoprint="\n"
print (messagetoprint)
txtcontent+=messagetoprint
## Adapt the count
count+=1
## Remove current mask
grass.run_command('r.mask', flags='r')
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_isegmentstats, "Statistics computed in ")
print (messagetoprint)
txtcontent+=messagetoprint
#### Write text file with log of processing time
## Create the .txt file for processing time output and begin to write
f = open(os.path.join(outputfolder,mapsetname+"_processingtime_isegmentstats.txt"), 'w')
f.write(mapsetname+" processing time information for i.segment.stats"+"\n\n")
f.write(txtcontent)
f.close()
## print
print_processing_time(begintime_computeobjstat,"Object statistics computed in ")
```
## Concatenate individual .csv files and replace unwanted values
BE CAREFUL! Before running the following cells, please check your data to be sure that it makes sense to replace the 'nan', 'null', or 'inf' values with "0".
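The `concat_findreplace` helper called below is defined earlier in the notebook and is not reproduced here. As a rough indication of its behaviour only, a minimal sketch could look like the following (the signature matches the call below; the body is an assumption, not the original implementation):
```
import glob, os

def concat_findreplace(indir, pattern, sep, findreplacedict, outfile):
    """Sketch only: concatenate the csv files matching 'pattern' in 'indir' into 'outfile',
    keep a single header line and replace unwanted cell values field by field."""
    files = sorted(glob.glob(os.path.join(indir, pattern)))
    with open(outfile, "w") as out:
        for i, path in enumerate(files):
            with open(path) as f:
                for j, line in enumerate(f):
                    if j == 0:
                        if i == 0:
                            out.write(line)  # keep the header of the first file only
                        continue
                    fields = line.rstrip("\n").split(sep)
                    fields = [findreplacedict.get(value, value) for value in fields]
                    out.write(sep.join(fields) + "\n")
    return str(len(files)) + " files concatenated into " + outfile
```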
```
## Define the outputfile for .csv containing statistics for all segments
outputfile=os.path.join(outputfolder,"all_segments_stats.csv")
print outputfile
# Create a dictionary with 'key' to be replaced by 'values'
findreplacedict={}
findreplacedict['nan']="0"
findreplacedict['null']="0"
findreplacedict['inf']="0"
# Define pattern of file to concatenate
pat=prefix+"_*.csv"
sep="|"
## Initialize an empty string for saving print outputs
txtcontent=""
## Saving current time for processing time management
begintime_concat=time.time()
## Print
messagetoprint="Start concatenate individual .csv files and replacing unwanted values."
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
# Concatenate and replace unwanted values
messagetoprint=concat_findreplace(outputfolder,pat,sep,findreplacedict,outputfile)
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_concat, "Process achieved in ")
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
#### Write text file with log of processing time
## Create the .txt file for processing time output and begin to write
filepath=os.path.join(outputfolder,mapsetname+"_processingtime_concatreplace.txt")
f = open(filepath, 'w')
f.write(mapsetname+" processing time information for concatenation of individual .csv files and replacing of unwanted values."+"\n\n")
f.write(txtcontent)
f.close()
```
# Create new database in postgresql
```
# User for postgresql connexion
dbuser="tais"
# Password of user
dbpassword="tais"
# Host of database
host="localhost"
# Name of the new database
dbname="ouaga_fullaoi_localsegment"
# Set name of schema for objects statistics
stat_schema="statistics"
# Set name of table with statistics of segments - FOR OPTICAL
object_stats_table="object_stats_optical"
raise RuntimeError("Intentional guard (a bare 'break' in the original): remove this line to create the database")
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
# Connect to postgres database
db=None
db=pg.connect(dbname='postgres', user=dbuser, password=dbpassword, host=host)
# Allow to create a new database
db.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
# Execute the CREATE DATABASE query
cur=db.cursor()
#cur.execute('DROP DATABASE IF EXISTS ' + dbname) #Comment this to avoid deleting existing DB
cur.execute('CREATE DATABASE ' + dbname)
cur.close()
db.close()
```
### Create PostGIS Extension in the database
```
raise RuntimeError("Intentional guard (a bare 'break' in the original): remove this line to create the PostGIS extension")
# Connect to the database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
# Execute the query
cur.execute('CREATE EXTENSION IF NOT EXISTS postgis')
# Make the changes to the database persistent
db.commit()
# Close connection with database
cur.close()
db.close()
```
<center> <font size=4> <h2>Import statistics of segments in a Postgresql database</h2> </font> </center>
## Create new schema in the postgresql database
```
schema=stat_schema
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
# Connect to postgres database
db=None
db=pg.connect(dbname=dbname, user='tais', password='tais', host='localhost')
# Allow to create a new database
db.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
# Execute the CREATE DATABASE query
cur=db.cursor()
#cur.execute('DROP SCHEMA IF EXISTS '+schema+' CASCADE') #Comment this to avoid deleting existing DB
try:
cur.execute('CREATE SCHEMA '+schema)
except Exception as e:
print ("Exception occured : "+str(e))
cur.close()
db.close()
```
## Create a new table
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
# Drop table if exists:
cur.execute("DROP TABLE IF EXISTS "+schema+"."+object_stats_table)
# Make the changes to the database persistent
db.commit()
import csv
# Create an empty list for saving the column names
column_name=[]
# Create a reader for the first csv file in the stack of csv to be imported
pathtofile=os.path.join(outputfolder, outputfile)
readercsvSubset=open(pathtofile)
readercsv=csv.reader(readercsvSubset, delimiter='|')
headerline=readercsv.next()
print "Create a new table '"+schema+"."+object_stats_table+"' with header corresponding to the first row of file '"+pathtofile+"'"
## Build a query for creation of a new table with auto-incremental key-value (thus avoiding potential duplicates of 'cat' value)
# The first column ('cat') is created as 'text' and the metric columns as 'double precision'; the 'nan', 'inf' and 'null' values were already replaced by "0" in a previous step
# This table allows importing all individual csv files into a single Postgres table, which will be cleaned afterwards
query="CREATE TABLE "+schema+"."+object_stats_table+" ("
query+="key_value serial PRIMARY KEY"
query+=", "+str(headerline[0])+" text"
column_name.append(str(headerline[0]))
for column in headerline[1:]:
if column[0] in ('1','2','3','4','5','6','7','8','9','0'):
query+=","
query+=" "+"W"+str(column)+" double precision"
column_name.append("W"+str(column))
else:
query+=","
query+=" "+str(column)+" double precision"
column_name.append(str(column))
query+=")"
# Execute the CREATE TABLE query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Close cursor and communication with the database
cur.close()
db.close()
```
## Copy objects statistics from csv to Postgresql database
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
## Initialize an empty string for saving print outputs
txtcontent=""
## Saving current time for processing time management
begintime_copy=time.time()
## Print
messagetoprint="Start copy of segments' statistics in the postgresql table '"+schema+"."+object_stats_table+"'"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
# Create a query to copy data from the csv, skipping the header, and filling only the columns which are in the csv (so that the auto-incremental key value works)
query="COPY "+schema+"."+object_stats_table+"("+', '.join(column_name)+") "
query+=" FROM '"+str(pathtofile)+"' HEADER DELIMITER '|' CSV;"
# Execute the COPY FROM CSV query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_copy, "Process achieved in ")
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
#### Write text file with log of processing time
## Create the .txt file for processing time output and begin to write
filepath=os.path.join(outputfolder,mapsetname+"_processingtime_PostGimport.txt")
f = open(filepath, 'w')
f.write(mapsetname+" processing time information for importation of segments' statistics in the PostGreSQL Database."+"\n\n")
f.write(txtcontent)
f.close()
# Close cursor and communication with the database
cur.close()
db.close()
```
# Drop duplicate values of CAT
Here, we will find duplicates. Indeed, as the statistics are computed for each tile (morphological area) with the computational region aligned to the pixel raster, some objects can appear in two different tiles, resulting in duplicates in the "CAT" column.
We first select the "CAT" values of the duplicated objects and put them in a list. Then, for each duplicated "CAT", we select the key-value (primary key) of the smallest object (area_min). The rows corresponding to those key-values are then removed using a "DELETE FROM" query. A sketch of this logic is given below.
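The helpers `find_duplicated_cat()`, `find_duplicated_key()`, `remove_duplicated_key()` and `vacuum()` are defined earlier in the notebook. The sketch below only illustrates the logic described above; the bodies, and column names such as `area_min`, are assumptions based on this description, not the original code:
```
def find_duplicated_cat():
    """Sketch: store in the global list 'cattodrop' the 'cat' values appearing more than once."""
    global cattodrop
    cur.execute("SELECT cat FROM " + schema + "." + object_stats_table +
                " GROUP BY cat HAVING count(*) > 1")
    cattodrop = [row[0] for row in cur.fetchall()]

def find_duplicated_key():
    """Sketch: for each duplicated 'cat', keep the key of the smallest object (assumed 'area_min' column)."""
    global keytodrop
    keytodrop = []
    for cat in cattodrop:
        cur.execute("SELECT key_value FROM " + schema + "." + object_stats_table +
                    " WHERE cat = %s ORDER BY area_min ASC LIMIT 1", (cat,))
        keytodrop.append(cur.fetchone()[0])

def remove_duplicated_key():
    """Sketch: delete the rows whose primary key was flagged for removal."""
    cur.execute("DELETE FROM " + schema + "." + object_stats_table +
                " WHERE key_value = ANY(%s)", (keytodrop,))
    db.commit()
```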
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
## Initialize an empty string for saving print outputs
txtcontent=""
## Saving current time for processing time management
begintime_removeduplic=time.time()
## Print
messagetoprint="Start removing duplicates in the postgresql table '"+schema+"."+object_stats_table+"'"
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
# Find duplicated 'CAT'
find_duplicated_cat()
# Remove duplicated
count_pass=1
count_removedduplic=0
while len(cattodrop)>0:
messagetoprint="Removing duplicates - Pass "+str(count_pass)
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
find_duplicated_key()
remove_duplicated_key()
messagetoprint=str(len(keytodrop))+" duplicates removed."
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
count_removedduplic+=len(keytodrop)
# Find again duplicated 'CAT'
find_duplicated_cat()
count_pass+=1
messagetoprint="A total of "+str(count_removedduplic)+" duplicates were removed."
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
## Compute processing time and print it
messagetoprint=print_processing_time(begintime_removeduplic, "Process achieved in ")
print (messagetoprint)
txtcontent+=messagetoprint+"\n"
#### Write text file with log of processing time
## Create the .txt file for processing time output and begin to write
filepath=os.path.join(outputfolder,mapsetname+"_processingtime_RemoveDuplic.txt")
f = open(filepath, 'w')
f.write(mapsetname+" processing time information for removing duplicated objects."+"\n\n")
f.write(txtcontent)
f.close()
# Vacuum the current Postgresql database
vacuum(db)
```
# Change the primary key from 'key_value' to 'cat'
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Open a cursor to perform database operations
cur=db.cursor()
# Build a query to drop the current constraint on primary key
query="ALTER TABLE "+schema+"."+object_stats_table+" \
DROP CONSTRAINT "+object_stats_table+"_pkey"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Build a query to change the datatype of 'cat' to 'integer'
query="ALTER TABLE "+schema+"."+object_stats_table+" \
ALTER COLUMN cat TYPE integer USING cat::integer"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Build a query to add primary key on 'cat'
query="ALTER TABLE "+schema+"."+object_stats_table+" \
ADD PRIMARY KEY (cat)"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Build a query to drop column 'key_value'
query="ALTER TABLE "+schema+"."+object_stats_table+" \
DROP COLUMN key_value"
# Execute the query
cur.execute(query)
# Make the changes to the database persistent
db.commit()
# Vacuum the current Postgresql database
vacuum(db)
# Close cursor and communication with the database
cur.close()
db.close()
```
### Show first rows of statistics
```
# Connect to an existing database
db=pg.connect(database=dbname, user=dbuser, password=dbpassword, host=host)
# Number of lines to show (please limit to 100 to save computing time)
nbrow=15
# Query
query="SELECT * FROM "+schema+"."+object_stats_table+" \
ORDER BY cat \
ASC LIMIT "+str(nbrow)
# Execute query through panda
df=pd.read_sql(query, db)
# Show dataframe
df.head(15)
```
<left> <font size=4> <b> End of classification part </b> </font> </left>
```
print("The script ends at "+ time.ctime())
print_processing_time(begintime_segmentation_full, "Entire process has been achieved in ")
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-**
# Introduction to Python: Syntax, Functions and Booleans
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://www.python.org/static/community_logos/python-logo.png" width="200px" height="200px" />
> Well, now that we know what Python is, and that we already have the tools to work with it, let's see how to use it.
References:
- https://www.kaggle.com/learn/python
___
# 1. Basic syntax
## 1.1 Hello, Python!
What better way to start than to analyze the following piece of code?
```
work_hours = 0
print(work_hours)
# ¡A trabajar! Como una hora, no menos, como cinco
work_hours = work_hours + 5
if work_hours > 0:
print("Mucho trabajo!")
rihanna_song = "Work " * work_hours
print(rihanna_song)
```
Can anyone guess what output the code above produces?
Well, let's see line by line what is happening:
```
work_hours = 0
```
**Variable assignment:** the line above creates a variable called `work_hours` and assigns it the value `0` using the `=` symbol.
Unlike other languages (such as Java or `C++`), variable assignment in Python:
- does not require the variable `work_hours` to be declared before assigning it a value;
- does not require telling Python what type of value the variable `work_hours` will hold (int, float, str, list...). In fact, we could later assign `work_hours` a value of a different type, such as a string or a boolean (`True` or `False`), as sketched below.
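A minimal illustration of this dynamic typing (added here for clarity, not part of the original notebook):
```
work_hours = 0            # starts as an int
print(type(work_hours))
work_hours = "many"       # the same name can later hold a str
print(type(work_hours))
work_hours = True         # ... or a bool
print(type(work_hours))
```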
```
print(work_hours)
```
**Function call**: `print` is a Python function that prints the value passed as its argument. Functions are called by putting parentheses after their name and writing their arguments (inputs) inside those parentheses.
```
# ¡A trabajar! Como una hora, no menos, como cinco
work_hours = work_hours + 5
# work_hours += 5 # Esto es completamente equivalente a la linea de arriba
print(work_hours)
```
The first line is a **comment**; comments in Python begin with the `#` symbol.
Next, a reassignment is made. In this case, we are assigning the variable `work_hours` a new value that involves an arithmetic operation on its own previous value.
```
if work_hours > 0:
print("Mucho trabajo!")
if work_hours > 10:
print("Mucho trabajo!")
```
It is not yet time to look at **conditionals**; however, you can easily guess what this piece of code does, since it reads almost literally.
Note that *indentation* is very important here: it specifies which part of the code belongs to the `if`. What belongs to the `if` starts after the colon (`:`) and must be indented on the lines below. So be very careful with indentation, especially if you have programmed in other languages where this detail does not matter.
Here we see a string (character string) variable type. A string object is specified in Python using double quotes ("") or single quotes ('').
```
"Work " == 'Work '
rihanna_song = "Work " * work_hours
print(rihanna_song)
a = 5
a
type(a)
a *= "A "
a
type(a)
```
The `*` operator can be used to multiply two numbers (`3 * 4` evaluates to `12`), but we can also multiply strings by integers, and we get a new string that repeats the first one that many times.
Many things of this kind happen in Python, lots of little "tricks" that save a lot of time.
## 1.2 Number types in Python and arithmetic operations
We already saw an example of a variable containing a number:
```
work_hours = 0
```
However, there are several types of "numbers". To be more technical, let's ask Python what type of variable `work_hours` is:
```
type(work_hours)
```
We see that it is an integer (`int`). There is another type of number we find in Python:
```
type(0.5)
```
A floating-point number (float) is a number with decimals.
We already know two standard Python functions: `print()` and `type()`. The latter is very useful for asking Python "What is this?".
Now let's look at arithmetic operations:
```
# Operación suma(+)/resta(-)
5 + 8, 9 - 3
# Operación multiplicación(*)
5 * 8
# Operación división(/)
6 / 7
# Operación división entera(//)
5 // 2
# Operación módulo(%)
5 % 2
# Exponenciación(**)
2**5
# Bitwise XOR (^)
## 2 == 010
## 5 == 101
## 2^5 == 111 == 1 * 2**2 + 1 * 2**1 + 1 * 2**0 == 7
2^5
```
The order in which operations are carried out is just as we were taught in primary/secondary school:
- PEMDAS: Parentheses, Exponents, Multiplication/Division, Addition/Subtraction.
When in doubt, always use parentheses.
```
# Ejemplo de altura con sombrero
altura_sombrero_cm = 20
mi_altura_cm = 183
# Que tan alto soy cuando me pongo sombrero?
altura_total_metros = altura_sombrero_cm + mi_altura_cm / 100
print("Altura total en metros =", altura_total_metros, "?")
# Que tan alto soy cuando me pongo sombrero?
altura_total_metros = (altura_sombrero_cm + mi_altura_cm) / 100
print("Altura total en metros =", altura_total_metros)
import this
```
### 1.2.1 Functions for working with numbers
`min()` and `max()` return the minimum and maximum of their arguments, respectively...
```
# min
min(1, 8, -5, 4.4, 4.89)
# max
max(1, 8, -5, 4.4, 4.89)
```
`abs()` returns the absolute value of its argument:
```
# abs
abs(5), abs(-5)
```
Besides being variable types, `float()` and `int()` can be used as functions to convert their argument to the specified type (we will see this better when we cover object-oriented programming):
```
print(float(10))
print(int(3.33))
# They can even be called on strings!
print(int('807') + 1)
int(8.99999)
```
___
# 2. Functions and help in Python
## 2.1 Asking for help
We already saw some functions in the previous section (`print()`, `abs()`, `min()`, `max()`), but what if we forget what one of them does?
No need to panic: the `help()` function will always be there to come to the rescue...
```
# Usar la función help sobre la función round
help(round)
help(max)
# Función round
round(8.99999)
round(8.99999, 2)
round(146, -2)
```
### CAREFUL!
The `help()` function takes as its argument the name of the function, **not the evaluated function**.
If it is passed the evaluated function, `help()` will give help about the result of the function and not about the function itself.
For example,
```
# Help de una función
help(round)
a = round(10.85)
type(a)
# Help de una función evaluada
help(round(10.85))
```
Try calling the `help()` function on other functions to see if you find anything interesting...
```
# Help sobre print
help(print)
# Print
print(1, 'a', "Hola, ¿Cómo están?", sep="_este es un separador_", end=" ")
print(56)
```
## 2.2 Defining functions
Python's built-in functions are very useful. However, we will soon realize that it would be even more useful to define our own functions so we can reuse them whenever we need them.
For example, let's create a function that, given three numbers, returns the minimum absolute difference between them
```
# Explicar acá la forma de definir una función
def diferencia_minima(a, b, c):
diff1 = abs(a - b)
diff2 = abs(a - c)
diff3 = abs(b - c)
return min(diff1, diff2, diff3)
```
Functions start with the keyword `def`, and the code indented after the colon `:` runs when the function is called.
`return` is another keyword that is only associated with functions. When Python encounters a `return`, it ends the function immediately and returns the value that follows the `return`.
What exactly does the function we wrote do?
```
# Ejemplo: llamar la función unas 3 veces
diferencia_minima(7, -5, 8)
diferencia_minima(7.4, 7, 0)
diferencia_minima(7, 6, 8)
type(diferencia_minima)
```
Let's try calling `help` on the function
```
help(diferencia_minima)
```
Well, Python is not smart enough to read the code and produce a good description of the function. That is the job of the function's designer: to include the documentation.
How is it done? (Remember to add an example)
```
# Copiar y pegar la función, pero esta vez, incluir documentación de la misma
def diferencia_minima(a, b, c):
"""
This function determines the minimum difference between the
three arguments passed a, b, c.
Example:
>>> diferencia_minima(7, -5, 8)
1
"""
diff1 = abs(a - b)
diff2 = abs(a - c)
diff3 = abs(b - c)
return min(diff1, diff2, diff3)
# Volver a llamar el help
help(diferencia_minima)
```
Very good. Now, notice that we can call this function on different numbers, even of different types:
- If all of them are integers, it will return an integer.
- If there is at least one float, it will return a float.
```
# Todos enteros
diferencia_minima(1, 1, 4)
# Uno o más floats
diferencia_minima(0., 0., 1)
```
However, not all inputs are valid:
```
# String: TypeError
diferencia_minima('a', 'b', 'c')
```
### 2.2.1 Functions that don't return
What happens if we don't include a `return` in our function?
```
# Ejemplo de función sin return
def imprimir(a):
print(a)
# Llamar la función un par de veces
imprimir('Hola a todos')
var = imprimir("Hola a todos")
print(var)
def write_file(a):
with open("file.txt", 'w') as f:
f.write(a)
write_file("Hola a todos")
```
### 2.2.2 Default arguments
Modify the greeting function so that it has a default argument.
```
# Función saludo con argumento por defecto
def greetings(name="Ashwin"):
# print(f"Welcome, {name}!")
# print("Welcome, " + name + "!")
# print("Welcome, ", name, "!", sep="")
print("Welcome, {}!".format(name))
# print("Welcome, %s!" %name)
greetings("Alejandro")
greetings()
```
___
# 3. Booleans and conditionals
## 3.1 Booleans
Python has a `bool` object type which can take one of two values: `True` or `False`.
Example:
```
x = True
print(x)
print(type(x))
```
Normally we don't put `True` or `False` directly in our code; rather, we obtain them as the result of a boolean operation (operations whose result is `True` or `False`).
Examples of operations:
```
# ==
3 == 3.
# !=
2.99999 != 3
# <
8 < 5
# >
8 > 5
# <=
4 <= 4
# >=
5 >= 8
```
**Note:** there is a huge difference between `==` and `=`. With the first we are asking about the value (`n==2`: is `n` equal to `2`?), while with the second we assign a value (`n=2`: `n` stores the value `2`). A tiny illustration follows.
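A tiny snippet (added for illustration, not in the original notebook):
```
n = 2          # '=' assigns: n now stores the value 2
print(n == 2)  # '==' asks a question: prints True
print(n == 3)  # prints False
n = 3          # reassignment with '='
print(n == 3)  # now prints True
```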
Example: write a function that, given a number, tells us whether it is odd
```
# Función para encontrar números impares
def odd(num_int):
return (num_int % 2) != 0
def odd(num_int):
if (num_int % 2) != 0:
return True
return False
# Probar la función
odd(5), odd(32)
(5, 4, 3) == ((5, 4, 3))
```
### 3.1.1 Combining boolean values
Python also provides basic operators for working with boolean values: `and`, `or`, and `not`.
For example, we can define a function to see whether it is worth walking to the taco stand on the corner:
```
# Función: ¿vale la pena ir a la taquería? distancia, clima, paraguas ...
def vale_la_pena_ir_taqueria(distancia, clima, paraguas):
return (distancia <= 100) and (clima != 'lluvioso' or paraguas == True)
# Probar función
vale_la_pena_ir_taqueria(distancia=50,
clima="soleado",
paraguas=False)
vale_la_pena_ir_taqueria(distancia=50,
clima="lluvioso",
paraguas=False)
```
We can also combine more than two values: what is the result of the following expression?
```
(True or True) and False
```
One can try to memorize the order of logical operations, just like the arithmetic ones. However, in line with Python's philosophy, using parentheses greatly improves readability and leaves no room for doubt.
The following pieces of code are equivalent, but which one reads better?
```
have_umbrella = True
rain_level = 4
have_hood = True
is_workday = False
prepared_for_weather = have_umbrella or rain_level < 5 and have_hood or not rain_level > 0 and is_workday
prepared_for_weather
prepared_for_weather = have_umbrella or (rain_level < 5 and have_hood) or not (rain_level > 0 and is_workday)
prepared_for_weather
prepared_for_weather = have_umbrella or ((rain_level < 5) and have_hood) or (not (rain_level > 0 and is_workday))
prepared_for_weather
prepared_for_weather = (
have_umbrella
or ((rain_level < 5) and have_hood)
or (not (rain_level > 0 and is_workday))
)
prepared_for_weather
```
___
## 3.2 Conditionals
Although booleans are useful on their own, they make their real leap to fame when combined with conditional clauses, using the keywords `if`, `elif`, and `else`.
Conditionals allow us to execute certain parts of the code depending on some boolean condition:
```
# Función de inspección de un número
def inspeccion(num):
if num == 0:
print('El numero', num, 'es cero')
elif num > 0:
print('El numero', num, 'es positivo')
elif num < 0:
print('El numero', num, 'es negativo')
else:
print('Nunca he visto un numero como', num)
# Probar la función
inspeccion(1), inspeccion(-1), inspeccion(0)
```
- `if` and `else` are used just like in other languages.
- The keyword `elif`, on the other hand, is a contraction of "else if".
- The use of `elif` and `else` is optional.
- Additionally, you can include as many `elif` clauses as needed.
As with functions, the block of code corresponding to the conditional starts after the colon (`:`), and what follows is indented 4 spaces (a tab). Everything that is indented belongs to the conditional, until we find a line without indentation.
For example, let's analyze the following function:
```
def f(x):
if x > 0:
print("Only printed when x is positive; x =", x)
print("Also only printed when x is positive; x =", x)
print("Always printed, regardless of x's value; x =", x)
f(-1)
```
### 3.2.1 Conversion to booleans
We already saw that the `int()` function converts its arguments into integers, and `float()` converts them into floating-point numbers.
Similarly, `bool()` converts its arguments into booleans.
```
print(bool(1)) # Todos los números excepto el cero 0 se tratan como True
print(bool(0))
print(bool("asf")) # Todos los strings excepto el string vacío "" se tratan como True
print(bool("")) # No confundir el string vacío "" con un espacio " "
bool(" ")
```
For example, what does the following code print?
```
if 0:
print(0)
elif "tocino":
print("tocino")
```
The following cells are equivalent. However, for readability we prefer the first one:
```
x = 10
if x != 0:
print('Estoy contento')
else:
print('No estoy tan contento')
if x:
print('Estoy contento')
else:
print('No estoy tan contento')
```
### 3.2.2 Conditional expressions
It is very common for a variable to take one of two values, depending on some condition:
```
# Función para ver si pasó o no dependiendo de la nota
def mensaje_calificacion(nota):
"""
Esta función imprime si pasaste o no de acuerdo a la nota obtenida.
La minima nota aprobatoria es de 6.
>>> mensaje_calificacion(9)
Pasaste la materia, con una nota de 9
>>> mensaje_calificacion(5)
Reprobaste la materia, con una nota de 5
"""
if nota >= 6:
print('Pasaste la materia, con una nota de', nota)
else:
print('Reprobaste la materia, con una nota de', nota)
mensaje_calificacion(5)
mensaje_calificacion(7)
mensaje_calificacion(10)
```
On the other hand, Python allows writing this type of expression in a single line, which is very useful and very readable:
```
# Función para ver si pasó o no dependiendo de la nota
def mensaje_calificacion(nota):
"""
Esta función imprime si pasaste o no de acuerdo a la nota obtenida.
>>> mensaje_calificacion(9)
Pasaste la materia, con una nota de 9
>>> mensaje_calificacion(5)
Reprobaste la materia, con una nota de 5
"""
resultado = 'Pasaste' if nota >= 6 else 'Reprobaste'
print(resultado + ' la materia, con una nota de', nota)
mensaje_calificacion(5)
mensaje_calificacion(7)
```
___
Today we saw:
- The basic syntax of Python, the int, float and str variable types, and some basic functions.
- How to ask for help about functions, and how to build our own functions.
- Boolean variables and conditionals.
For the next class:
- Assignment 1 due Wednesday (23:59).
```
import numpy as np
import random
import pandas as pd
import sklearn
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = (10.0, 8.0)
from sklearn.datasets import make_biclusters
from sklearn.datasets import samples_generator as sg
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn import preprocessing
# from sklearn.cluster.bicluster import SpectralCoclustering
from sklearn.metrics import consensus_score
from sklearn.metrics.cluster import normalized_mutual_info_score
from sklearn.metrics.cluster import adjusted_rand_score
from biclustering import DeltaBiclustering, MSR
%pylab inline
def generate_dataset(option, noise=1, noise_background=True, shuffle=False):
"""
This function generates synthetic datasets as described in the paper
(http://cs-people.bu.edu/panagpap/Research/Bio/bicluster_survey.pdf)
- Figure 4.
Params
option (str): bicluster structure ('a' to 'i')
noise (int): value of the noise in the matrix
noise_background (bool): positions where is not a bicluster should contain noise
if this parameter is set to True
shuffle (bool): shuffle lines and columns of the matrix if this parameter is set
to True
Returns
data (array_like): matrix generated
"""
shape = (150,150)
n,m = shape
# the bicluster center values shouldn't be too far apart...
centers = [20, 40, 60, 80, 100]
y_row = np.zeros(150)
y_col = np.zeros(150)
if noise_background:
data = np.random.rand(n, m)*100
else:
data = np.zeros(n*m).reshape(shape)
if option == 'a':
data[60:110][:,70:140] = np.random.rand(50,70)*noise + centers[0]
y_row[60:110] += 1
y_col[70:140] += 1
elif option == 'd':
data[0:50][:,0:70] = np.random.rand(50,70)*noise + centers[0]
y_row[0:50] += 1
y_col[0:70] += 1
data[50:100][:,50:100] = np.random.rand(50,50)*noise + centers[2]
y_row[50:100] += 2
y_col[50:100] += 2
data[100:150][:,80:150] = np.random.rand(50,70)*noise + centers[1]
y_row[100:150] += 3
y_col[80:150] += 3
elif option == 'e':
data[0:70][:,0:50] = np.random.rand(70,50)*noise + centers[3]
y_row[0:70] += 1
y_col[0:50] += 1
data[50:100][:,50:100] = np.random.rand(50,50)*noise + centers[1]
y_row[50:100] += 2
y_col[50:100] += 2
data[80:150][:,100:150] = np.random.rand(70,50)*noise + centers[2]
y_row[80:150] += 3
y_col[100:150] += 3
elif option == 'f':
data[0:50][:,0:40] = np.random.rand(50,40)*noise + centers[4]
y_row[0:50] += 1
y_col[0:40] += 1
data[50:150][:,0:40] = np.random.rand(100,40)*noise + centers[0]
y_row[50:150] += 2
data[110:150][:,40:95] = np.random.rand(40,55)*noise + centers[2]
y_row[110:150] += 3
y_col[40:95] += 2
data[110:150][:,95:150] = np.random.rand(40,55)*noise + centers[1]
y_row[110:150] += 3
y_col[95:150] += 3
elif option == 'g':
data[0:110][:,0:40] = np.random.rand(110,40)*noise + centers[0]
data[110:150][:,0:110] = np.random.rand(40,110)*noise + centers[2]
data[40:150][:,110:150] = np.random.rand(110,40)*noise + centers[1]
data[0:40][:,40:150] = np.random.rand(40,110)*noise + centers[3]
elif option == 'h':
data[0:90][:,0:90] = np.random.rand(90,90)*noise + centers[0]
data[35:55][:,35:55] = (np.random.rand(20,20)*noise + centers[1]) + data[35:55][:,35:55]
data[110:140][:,35:90] = np.random.rand(30,55)*noise + centers[4]
data[0:140][:,110:150] = np.random.rand(140,40)*noise + centers[2]
data[0:55][:,130:150] = (np.random.rand(55,20)*noise + centers[3]) + data[0:55][:,130:150]
elif option == 'i':
data[20:70][:,20:70] = np.random.rand(50,50)*noise + centers[0]
data[20:70][:,100:150] = np.random.rand(50,50)*noise + centers[1]
data[50:110][:,50:120] = np.random.rand(60,70)*noise + centers[2]
data[120:150][:,20:100] = np.random.rand(30,80)*noise + centers[3]
    if shuffle:
        # Permute rows and columns, and the labels with them, so the ground-truth
        # labels still line up with the shuffled matrix.
        row_perm = np.random.permutation(n)
        col_perm = np.random.permutation(m)
        data = data[row_perm][:, col_perm]
        y_row = y_row[row_perm]
        y_col = y_col[col_perm]
    return data, y_row, y_col
from numba import jit
@jit(nopython=True)
def compute_U(X, S, V, m, k):
    # Assign each row of X to the closest row of S V.T (hard row-clustering step).
    V_tilde = np.dot(S, V.T)
    U_new = np.zeros((m, k))  # zeros, not empty: only one entry per row is set to 1
    for i in xrange(m):
        errors = np.empty(k)
        for row_clust_ind in xrange(k):
            errors[row_clust_ind] = np.sum((X[i, :] - V_tilde[row_clust_ind, :])**2)
        ind = np.argmin(errors)
        U_new[i, ind] = 1
    return U_new
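# Fast Non-negative Matrix Tri-Factorization (FNMTF), as implemented below:
# X (m x n) is approximated by U.dot(S).dot(V.T), where U (m x k) and V (n x l)
# are hard cluster-indicator matrices and S (k x l) holds the block values.
# Each iteration alternates three updates:
#   1. S <- pinv(U.T U) U.T X V pinv(V.T V)                  (least-squares fit)
#   2. V <- assign each column of X to its closest column of U S
#   3. U <- assign each row of X to its closest row of S V.T  (compute_U above)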
def fnmtf(X, k, l, num_iter=10, norm=True):
m, n = X.shape
U = np.random.rand(m,k)
S = np.random.rand(k,l)
V = np.random.rand(n,l)
if norm:
X = preprocessing.normalize(X)
for i in xrange(num_iter):
S = pinv(U.T.dot(U)).dot(U.T).dot(X).dot(V).dot(pinv(V.T.dot(V)))
# solve subproblem to update V
U_tilde = U.dot(S)
V_new = np.zeros(n*l).reshape(n, l)
for j in range(n):
errors = np.zeros(l)
for col_clust_ind in xrange(l):
errors[col_clust_ind] = ((X[:][:, j] - U_tilde[:][:, col_clust_ind])**2).sum()
ind = np.argmin(errors)
V_new[j][ind] = 1
V = V_new
# while np.linalg.det(V.T.dot(V)) <= 0:
# erros = (X - U.dot(S).dot(V.T)) ** 2
# erros = np.sum(erros.dot(V), axis=0) / np.sum(V, axis=0)
# erros[np.where(np.sum(V, axis=0) <= 1)] = -inf
# quantidade = np.sum(V, axis=0)
# indexMin = np.argmin(quantidade)
# indexMax = np.argmax(erros)
# indexes = np.nonzero(V[:, indexMax])[0]
# for j in indexes:
# if np.random.rand(1) > 0.5:
# V[j, indexMax] = 0
# V[j, indexMin] = 1
# solve subproblem to update U
        U = compute_U(X, S, V, m, k)
# while np.linalg.det(U.T.dot(U)) <= 0:
# erros = (X - U.dot(V_tilde)) ** 2
# erros = np.sum(U.T.dot(erros), axis=1) / np.sum(U, axis=0)
# erros[np.where(np.sum(U, axis=0) <= 1)] = -np.inf
# quantidade = np.sum(U, axis=0)
# indexMin = np.argmin(quantidade)
# indexMax = np.argmax(erros)
# indexes = np.nonzero(U[:, indexMax])[0]
# end = len(indexes)
# indexes_p = np.random.permutation(end)
# U[indexes[indexes_p[0:np.floor(end/2.0)]], indexMax] = 0.0
# U[indexes[indexes_p[0:np.floor(end/2.0)]], indexMin] = 1.0
rows_ind = np.argmax(U, axis=1)
cols_ind = np.argmax(V, axis=1)
return U, S, V, rows_ind, cols_ind
# m, n = (40, 35)
# X = .01 * np.random.rand(m,n)
# X[0:10][:, 0:10] = 1 + .01 * np.random.random()
# X[30:40][:, 20:35] = 1 + .01 * np.random.random()
# X[20:30][:, 20:35] = .6 + .01 * np.random.random()
# X[30:40][:, 36:40] = 1 + .01 * np.random.random()
# m, n = (6, 8)
# X = .01 * np.random.rand(m,n)
# X[0:2][:, 0:4] = 1 + .01 * np.random.random()
# X[2:4][:, 4:8] = .6 + .01 * np.random.random()
# X[4:6][:, 0:8] = .8 + .01 * np.random.random()
plt.matshow(X, cmap=plt.cm.Blues)
plt.title('Original data')
plt.grid()
plt.show()
U, S, V, rows_ind, cols_ind = fnmtf(X, 3, 2, norm=False)
def plot_factorization_result(U, S, V):
fig = plt.figure()
ax = fig.add_subplot(2, 2, 1)
ax.matshow(U.dot(S).dot(V.T), cmap=plt.cm.Blues)
ax.set_title('reconstruction')
ax.grid()
ax2 = fig.add_subplot(2, 2, 2)
ax2.matshow(U, cmap=plt.cm.Blues)
    ax2.set_title('U')  # ax2 plots U itself
ax2.grid()
ax3 = fig.add_subplot(2, 2, 3)
ax3.matshow(S, cmap=plt.cm.Blues)
ax3.set_title('S')
ax3.grid()
ax4 = fig.add_subplot(2, 2, 4)
ax4.matshow(V.T, cmap=plt.cm.Blues)
    ax4.set_title("V'")  # ax4 plots V transposed
ax4.grid()
plt.show()
def scores(labels_true, labels_pred, row=True):
if row:
print 'Rows scores'
else:
print 'Cols scores'
    print 'Adjusted Rand score: %s' % adjusted_rand_score(labels_true, labels_pred)
print 'Normalized mutual information score: %s' % normalized_mutual_info_score(labels_true, labels_pred)
print ''
plot_factorization_result(U, S, V)
scores(rows_ind, [0, 0, 1, 1, 2, 2])
scores(cols_ind, [0, 0, 0, 0, 1, 1, 1, 1], row=False)
X, x_labels, y_labels = generate_dataset('d', noise_background=False, shuffle=False)
temp, _, _ = generate_dataset('d', noise_background=False)
fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1)
ax1.matshow(temp, cmap=plt.cm.Blues)
ax1.set_title('Original data')
ax1.grid()
ax2 = fig.add_subplot(1, 2, 2)
ax2.matshow(X, cmap=plt.cm.Blues)
ax2.set_title('Shuffled data')
ax2.grid()
plt.show()
import time
t1 = time.time()
U, S, V, rows_ind, cols_ind = fnmtf(X, 3, 3, norm=False)
t2 = time.time()
print ('dt: {} secs'.format(t2-t1))
plot_factorization_result(U, S, V)
scores(rows_ind, x_labels)
scores(cols_ind, y_labels, row=False)
%load_ext Cython
%%cython
import cython
cimport cython
import numpy as np
cimport numpy as np
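# Same alternating FNMTF updates as fnmtf() above, rewritten with typed
# memoryviews and with bounds/wraparound/None checks disabled for speed.
# Unlike fnmtf(), this version also tracks the factorization with the lowest
# reconstruction error seen across iterations (U_best, S_best, V_best).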
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def fnmtf_improved(double[:, ::1] X, int k, int l, int num_iter=100, int norm=0):
cdef int m = X.shape[0]
cdef int n = X.shape[1]
cdef unsigned int i = 0
cdef unsigned int j = 0
cdef unsigned int iter_index = 0
cdef unsigned int row_clust_ind = 0
cdef unsigned int col_clust_ind = 0
cdef unsigned int ind = 0
cdef double[:, ::1] U = np.random.rand(m, k).astype(np.float64)
cdef double[:, ::1] U_best = np.random.rand(m, k).astype(np.float64)
cdef double[:, ::1] S = np.random.rand(k, l).astype(np.float64)
cdef double[:, ::1] S_best = np.random.rand(k, l).astype(np.float64)
cdef double[:, ::1] V = np.random.rand(n, l).astype(np.float64)
cdef double[:, ::1] V_best = np.random.rand(n, l).astype(np.float64)
cdef double[:, ::1] U_tilde = np.empty((m, l), dtype=np.float64)
cdef double[:, ::1] V_new = np.empty((n, l), dtype=np.float64)
cdef double[:, ::1] V_tilde = np.empty((l, n), dtype=np.float64)
cdef double[:, ::1] U_new = np.empty((m, k), dtype=np.float64)
cdef double error_best = 10e9999
cdef double error = 10e9999
cdef double[:] errors_v = np.zeros(l, dtype=np.float64)
cdef double[:] errors_u = np.zeros(k, dtype=np.float64)
for iter_index in range(num_iter):
S[:, :] = np.dot( np.dot(np.linalg.pinv(np.dot(U.T, U)), np.dot(np.dot(U.T, X), V)), np.linalg.pinv(np.dot(V.T, V)) )
# solve subproblem to update V
U_tilde[:, :] = np.dot(U, S)
        V_new[:, :] = np.zeros((n, l), dtype=np.float64)  # reset indicators; dtype must match the double memoryview
for j in range(n):
errors_v = np.zeros(l, dtype=np.float64)
for col_clust_ind in range(l):
errors_v[col_clust_ind] = np.sum(np.square(np.subtract(X[:, j], U_tilde[:, col_clust_ind])))
ind = np.argmin(errors_v)
V_new[j, ind] = 1.0
V[:, :] = V_new
# solve subproblem to update U
V_tilde[:, :] = np.dot(S, V.T)
        U_new[:, :] = np.zeros((m, k), dtype=np.float64)  # reset indicators; dtype must match the double memoryview
for i in range(m):
errors_u = np.zeros(k, dtype=np.float64)
for row_clust_ind in range(k):
errors_u[row_clust_ind] = np.sum(np.square(np.subtract(X[i, :], V_tilde[row_clust_ind, :])))
ind = np.argmin(errors_u)
U_new[i, ind] = 1.0
U[:, :] = U_new
error_ant = error
error = np.sum(np.square(np.subtract(X, np.dot(np.dot(U, S), V.T))))
if error < error_best:
U_best[:, :] = U
S_best[:, :] = S
V_best[:, :] = V
            error_best = error
    # Return the best factorization found, plus hard row/column cluster
    # assignments, matching the interface of fnmtf() above.
    U_arr = np.asarray(U_best)
    S_arr = np.asarray(S_best)
    V_arr = np.asarray(V_best)
    return U_arr, S_arr, V_arr, np.argmax(U_arr, axis=1), np.argmax(V_arr, axis=1)
import time
X, x_labels, y_labels = generate_dataset('d', noise_background=False, shuffle=False)
t1 = time.time()
U, S, V, rows_ind, cols_ind = fnmtf_improved(X, 3, 3)
t2 = time.time()
print ('dt: {} secs'.format(t2-t1))
plot_factorization_result(U, S, V)
scores(rows_ind, x_labels)
scores(cols_ind, y_labels, row=False)
```
## Assigning gender based on first name
A straightforward task in natural language processing is to assign gender based on first name. Social scientists are often interested in gender inequalities and may have a dataset that lists name but not gender, such as a list of journal articles with authors in a study of gendered citation practices.
Assigning gender based on name is usually done by comparing a given name with the name's gender distribution on official records, such as the US Social Security baby name list. While this works for most names, some names, such as Gershun or Hunna, are too rare to have reliable estimates in most available official records. Other names, such as Jian or Blake, are common among both men and women. A fourth category consists of names that are disproportionately one gender but still have non-trivial numbers of the other gender, such as Cody or Kyle. For both these names and androgynous names, there are often generational differences in the gender distribution.
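As a rough sketch of how such a lookup works (the counts and thresholds below are made up purely for illustration, not taken from any official record), the core logic is just comparing a name's share of male versus female records:
```
# Hypothetical counts of how often each name appears as male or female in some records list.
name_counts = {"cody": {"male": 9000, "female": 900},
               "jian": {"male": 5000, "female": 5200}}

def guess_gender(name, cutoff=0.95, mostly=0.75):
    counts = name_counts.get(name.lower())
    if counts is None:
        return "unknown"
    share_male = counts["male"] / float(counts["male"] + counts["female"])
    if share_male >= cutoff:
        return "male"
    if share_male >= mostly:
        return "mostly_male"
    if share_male <= 1 - cutoff:
        return "female"
    if share_male <= 1 - mostly:
        return "mostly_female"
    return "andy"  # androgynous

print(guess_gender("Cody"))   # 'mostly_male' under these made-up counts
print(guess_gender("Jian"))   # 'andy'
```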
The most efficient way to gender names in Python is with the `gender_guesser` library, which is based on Jörg Michael's multinational list of more than 48,000 names. The first time you use the library, you may need to install it:
`%pip install gender_guesser`
The `gender_guesser` library is set up so that you first import the detector module and then create a detector. In my case, the detector is named `d` and one parameter is passed, which instructs the detector to ignore capitalization.
```
import gender_guesser.detector as gender
d = gender.Detector(case_sensitive=False)
```
When passed a name, the detector's `get_gender` returns either 'male', 'female', 'mostly_male', 'mostly_female', 'andy' (for androgenous names), or 'unknown' (for names not in the dataset).
```
d.get_gender("Barack")
d.get_gender("Theresa")
d.get_gender("JAMIE")
d.get_gender("sidney")
d.get_gender("Tal")
```
In almost all cases, you will want to analyze a large list of names, rather than a single name. For example, the University of North Carolina, Chapel Hill makes available salary information on employees. The dataset includes name, department, position, salary, and years of employment, but not gender.
```
import pandas as pd
df = pd.read_csv("data/unc_salaries.csv")
df.head(10)
```
A column with name-based gender assignment can be created by applying `d.get_gender` to the first name column.
```
df["Gender"] = df["First Name"].apply(d.get_gender)
df["Gender"].value_counts(normalize=True)
```
For this dataset, the majority of the names can be gendered, while less than ten percent of names are not in the dataset.
Selecting the rows in the dataframe where gender is unknown and then listing the values can be useful for inspecting cases and evaluating the gender-name assignment process.
```
cases = df["Gender"] == "unknown"
df[cases]["First Name"].values
```
My quick interpretation of this list is that these are names that are rare in the US, and some are likely transliterated using an uncommon English spelling. The names with missing gender are not random, and the process that creates the missingness is likely correlated with other variables of interest, such as salary. This might affect a full analysis of gender patterns, but I'll ignore it in this preliminary analysis.
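One possible mitigation, not applied in this preliminary analysis, is to strip diacritics from the unknown names and retry the lookup; the short sketch below assumes the detector `d` and dataframe `df` defined above.
```
import unicodedata

def strip_accents(name):
    # e.g. 'José' -> 'Jose'; plain ASCII names pass through unchanged.
    return "".join(c for c in unicodedata.normalize("NFKD", name)
                   if not unicodedata.combining(c))

unknown = df["Gender"] == "unknown"
df.loc[unknown, "Gender"] = (df.loc[unknown, "First Name"]
                               .apply(strip_accents)
                               .apply(d.get_gender))
```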
If you were conducting your analysis in another statistical package, you could export your dataframe with the new gender column.
```
df.to_csv("unc_salaries_gendered.csv")
```
You could also produce some summary statistics in your notebook. For example, the pandas `groupby` method can be used to estimate median salary by gender.
```
df.groupby("Gender")["Salary"].median()
```
Comparing the male- and female-coded names shows evidence of a large salary gap based on gender. The "mostly" and unknown categories fall in the middle, but interestingly, the androgynous names are associated with the lowest salaries.
Grouping by gender and position may be useful in understanding the mechanisms that produce the gender gap. I also focus on just the individuals with names that are coded as male or female.
```
subset = df["Gender"].isin(["male", "female"])
df[subset].groupby(["Position", "Gender"])["Salary"].median()
```
This summary dataframe can also be plotted, which clearly shows that the median salary for male Assistant Professors is higher than the median salary of the higher ranked female Associate Professors.
```
%matplotlib inline
df[subset].groupby(['Position','Gender'])['Salary'].median().plot(kind='barh');
```
Sometimes the first name will not be its own field, but part of a name column that contains the full name. In that case, you will need to create a function that extracts the first name.
In this dataframe, the full name column contains the last name, followed by a comma, and then the first name and possibly a middle name or initial. A brief function extracts the first name and passes it to the detector:
```
def gender_name(name):
"""
Extracts and genders first name when the original name is formatted "Last, First M".
Assumes a gender.Detector named `d` is already declared.
"""
    first_name = name.split(", ")[-1] # grab the segment after the comma
first_name = first_name.split(" ")[0] # remove middle name/initial
gender = d.get_gender(first_name)
return gender
```
This function can now be applied to the full name column.
```
df["Gender"] = df["Full Name"].apply(gender_name)
df["Gender"].value_counts()
```
The results are the same as the original gender column.
version 1.0.3
# **Text Analysis and Entity Resolution**
#### Entity resolution is a common, yet difficult problem in data cleaning and integration. This lab will demonstrate how we can use Apache Spark to apply powerful and scalable text analysis techniques and perform entity resolution across two datasets of commercial products.
#### Entity Resolution, or "[Record linkage][wiki]", is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. Other terms with the same meaning include "entity disambiguation/linking", "duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration", and "conflation".
#### Entity Resolution (ER) refers to the task of finding records in a dataset that refer to the same entity across different data sources (e.g., data files, books, websites, databases). ER is necessary when joining datasets based on entities that may or may not share a common identifier (e.g., database key, URI, National identification number), as may be the case due to differences in record shape, storage location, and/or curator style or preference. A dataset that has undergone ER may be referred to as being cross-linked.
[wiki]: https://en.wikipedia.org/wiki/Record_linkage
### Code
#### This assignment can be completed using basic Python, pySpark Transformations and actions, and the plotting library matplotlib. Other libraries are not allowed.
### Files
#### Data files for this assignment are from the [metric-learning](https://code.google.com/p/metric-learning/) project and can be found at:
`cs100/lab3`
#### The directory contains the following files:
* **Google.csv**, the Google Products dataset
* **Amazon.csv**, the Amazon dataset
* **Google_small.csv**, 200 records sampled from the Google data
* **Amazon_small.csv**, 200 records sampled from the Amazon data
* **Amazon_Google_perfectMapping.csv**, the "gold standard" mapping
* **stopwords.txt**, a list of common English words
#### Besides the complete data files, there are "sample" data files for each dataset - we will use these for **Part 1**. In addition, there is a "gold standard" file that contains all of the true mappings between entities in the two datasets. Every row in the gold standard file has a pair of record IDs (one Google, one Amazon) that belong to two records that describe the same thing in the real world. We will use the gold standard to evaluate our algorithms.
### **Part 0: Preliminaries**
#### We read in each of the files and create an RDD consisting of lines.
#### For each of the data files ("Google.csv", "Amazon.csv", and the samples), we want to parse the IDs out of each record. The IDs are the first column of the file (they are URLs for Google, and alphanumeric strings for Amazon). Omitting the headers, we load these data files into pair RDDs where the *mapping ID* is the key, and the value is a string consisting of the name/title, description, and manufacturer from the record.
#### The file format of an Amazon line is:
`"id","title","description","manufacturer","price"`
#### The file format of a Google line is:
`"id","name","description","manufacturer","price"`
```
import re
DATAFILE_PATTERN = '^(.+),"(.+)",(.*),(.*),(.*)'
def removeQuotes(s):
""" Remove quotation marks from an input string
Args:
s (str): input string that might have the quote "" characters
Returns:
str: a string without the quote characters
"""
return ''.join(i for i in s if i!='"')
def parseDatafileLine(datafileLine):
""" Parse a line of the data file using the specified regular expression pattern
Args:
datafileLine (str): input string that is a line from the data file
Returns:
str: a string parsed using the given regular expression and without the quote characters
"""
match = re.search(DATAFILE_PATTERN, datafileLine)
if match is None:
print 'Invalid datafile line: %s' % datafileLine
return (datafileLine, -1)
elif match.group(1) == '"id"':
print 'Header datafile line: %s' % datafileLine
return (datafileLine, 0)
else:
product = '%s %s %s' % (match.group(2), match.group(3), match.group(4))
return ((removeQuotes(match.group(1)), product), 1)
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab3')
GOOGLE_PATH = 'Google.csv'
GOOGLE_SMALL_PATH = 'Google_small.csv'
AMAZON_PATH = 'Amazon.csv'
AMAZON_SMALL_PATH = 'Amazon_small.csv'
GOLD_STANDARD_PATH = 'Amazon_Google_perfectMapping.csv'
STOPWORDS_PATH = 'stopwords.txt'
def parseData(filename):
""" Parse a data file
Args:
filename (str): input file name of the data file
Returns:
RDD: a RDD of parsed lines
"""
return (sc
.textFile(filename, 4, 0)
.map(parseDatafileLine)
.cache())
def loadData(path):
""" Load a data file
Args:
path (str): input file name of the data file
Returns:
RDD: a RDD of parsed valid lines
"""
filename = os.path.join(baseDir, inputPath, path)
raw = parseData(filename).cache()
failed = (raw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in failed.take(10):
print '%s - Invalid datafile line: %s' % (path, line)
valid = (raw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print '%s - Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (path,
raw.count(),
valid.count(),
failed.count())
assert failed.count() == 0
assert raw.count() == (valid.count() + 1)
return valid
googleSmall = loadData(GOOGLE_SMALL_PATH)
google = loadData(GOOGLE_PATH)
amazonSmall = loadData(AMAZON_SMALL_PATH)
amazon = loadData(AMAZON_PATH)
```
#### Let's examine the lines that were just loaded in the two subset (small) files - one from Google and one from Amazon
```
for line in googleSmall.take(3):
print 'google: %s: %s\n' % (line[0], line[1])
for line in amazonSmall.take(3):
print 'amazon: %s: %s\n' % (line[0], line[1])
```
### **Part 1: ER as Text Similarity - Bags of Words**
#### A simple approach to entity resolution is to treat all records as strings and compute their similarity with a string distance function. In this part, we will build some components for performing bag-of-words text-analysis, and then use them to compute record similarity.
#### [Bag-of-words][bag-of-words] is a conceptually simple yet powerful approach to text analysis.
#### The idea is to treat strings, a.k.a. **documents**, as *unordered collections* of words, or **tokens**, i.e., as bags of words.
> #### **Note on terminology**: a "token" is the result of parsing the document down to the elements we consider "atomic" for the task at hand. Tokens can be things like words, numbers, acronyms, or other exotica like word-roots or fixed-length character strings.
> #### Bag of words techniques all apply to any sort of token, so when we say "bag-of-words" we really mean "bag-of-tokens," strictly speaking.
#### Tokens become the atomic unit of text comparison. If we want to compare two documents, we count how many tokens they share in common. If we want to search for documents with keyword queries (this is what Google does), then we turn the keywords into tokens and find documents that contain them. The power of this approach is that it makes string comparisons insensitive to small differences that probably do not affect meaning much, for example, punctuation and word order.
[bag-of-words]: https://en.wikipedia.org/wiki/Bag-of-words_model
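#### As a quick illustration outside the graded exercises (plain Python; the two product strings below are made up), counting the tokens two strings share already gives a crude similarity signal:
```
from collections import Counter

def bag_of_words(text):
    # Lowercase and split on whitespace; the lab's tokenizer below uses a regex instead.
    return Counter(text.lower().split())

doc1 = bag_of_words("adobe photoshop cs3 for windows")
doc2 = bag_of_words("photoshop cs3 upgrade for windows")
shared = set(doc1) & set(doc2)
print(shared)                        # shared tokens: photoshop, cs3, for, windows
print(sum((doc1 & doc2).values()))   # 4 shared token occurrences
```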
### **1(a) Tokenize a String**
#### Implement the function `simpleTokenize(string)` that takes a string and returns a list of non-empty tokens in the string. `simpleTokenize` should split strings using the provided regular expression. Since we want to make token-matching case insensitive, make sure all tokens are turned lower-case. Give an interpretation, in natural language, of what the regular expression, `split_regex`, matches.
#### If you need help with Regular Expressions, try the site [regex101](https://regex101.com/) where you can interactively explore the results of applying different regular expressions to strings. *Note that \W does not match the "_" character, so tokens like "456_b" stay intact*. You should use [re.split()](https://docs.python.org/2/library/re.html#re.split) to perform the string split. Also, make sure you remove any empty tokens.
```
# TODO: Replace <FILL IN> with appropriate code
quickbrownfox = 'A quick brown fox jumps over the lazy dog.'
split_regex = r'\W+'
def simpleTokenize(string):
""" A simple implementation of input string tokenization
Args:
string (str): input string
Returns:
list: a list of tokens
"""
return [item for item in re.split(split_regex, string.lower()) if item]
print simpleTokenize(quickbrownfox) # Should give ['a', 'quick', 'brown', ... ]
# TEST Tokenize a String (1a)
Test.assertEquals(simpleTokenize(quickbrownfox),
['a','quick','brown','fox','jumps','over','the','lazy','dog'],
'simpleTokenize should handle sample text')
Test.assertEquals(simpleTokenize(' '), [], 'simpleTokenize should handle empty string')
Test.assertEquals(simpleTokenize('!!!!123A/456_B/789C.123A'), ['123a','456_b','789c','123a'],
'simpleTokenize should handle puntuations and lowercase result')
Test.assertEquals(simpleTokenize('fox fox'), ['fox', 'fox'],
'simpleTokenize should not remove duplicates')
```
### **(1b) Removing stopwords**
#### *[Stopwords][stopwords]* are common (English) words that do not contribute much to the content or meaning of a document (e.g., "the", "a", "is", "to", etc.). Stopwords add noise to bag-of-words comparisons, so they are usually excluded.
#### Using the included file "stopwords.txt", implement `tokenize`, an improved tokenizer that does not emit stopwords.
[stopwords]: https://en.wikipedia.org/wiki/Stop_words
```
# TODO: Replace <FILL IN> with appropriate code
stopfile = os.path.join(baseDir, inputPath, STOPWORDS_PATH)
stopwords = set(sc.textFile(stopfile).collect())
print 'These are the stopwords: %s' % stopwords
def tokenize(string):
""" An implementation of input string tokenization that excludes stopwords
Args:
string (str): input string
Returns:
list: a list of tokens without stopwords
"""
return [token for token in simpleTokenize(string) if token not in stopwords]
print tokenize(quickbrownfox) # Should give ['quick', 'brown', ... ]
# TEST Removing stopwords (1b)
Test.assertEquals(tokenize("Why a the?"), [], 'tokenize should remove all stopwords')
Test.assertEquals(tokenize("Being at the_?"), ['the_'], 'tokenize should handle non-stopwords')
Test.assertEquals(tokenize(quickbrownfox), ['quick','brown','fox','jumps','lazy','dog'],
'tokenize should handle sample text')
```
### **(1c) Tokenizing the small datasets**
#### Now let's tokenize the two *small* datasets. For each ID in a dataset, `tokenize` the values, and then count the total number of tokens.
#### How many tokens, total, are there in the two datasets?
```
# TODO: Replace <FILL IN> with appropriate code
amazonRecToToken = amazonSmall.map(lambda x: (x[0], tokenize(x[1])))
googleRecToToken = googleSmall.map(lambda x: (x[0], tokenize(x[1])))
def countTokens(vendorRDD):
""" Count and return the number of tokens
Args:
vendorRDD (RDD of (recordId, tokenizedValue)): Pair tuple of record ID to tokenized output
Returns:
count: count of all tokens
"""
return vendorRDD.map(lambda x: len(x[1])).sum()
totalTokens = countTokens(amazonRecToToken) + countTokens(googleRecToToken)
print 'There are %s tokens in the combined datasets' % totalTokens
# TEST Tokenizing the small datasets (1c)
Test.assertEquals(totalTokens, 22520, 'incorrect totalTokens')
```
### **(1d) Amazon record with the most tokens**
#### Which Amazon record has the biggest number of tokens?
#### In other words, you want to sort the records and get the one with the largest count of tokens.
```
# TODO: Replace <FILL IN> with appropriate code
def findBiggestRecord(vendorRDD):
""" Find and return the record with the largest number of tokens
Args:
vendorRDD (RDD of (recordId, tokens)): input Pair Tuple of record ID and tokens
Returns:
list: a list of 1 Pair Tuple of record ID and tokens
"""
return vendorRDD.takeOrdered(1, lambda x: -len(x[1]))
biggestRecordAmazon = findBiggestRecord(amazonRecToToken)
print 'The Amazon record with ID "%s" has the most tokens (%s)' % (biggestRecordAmazon[0][0],
len(biggestRecordAmazon[0][1]))
# TEST Amazon record with the most tokens (1d)
Test.assertEquals(biggestRecordAmazon[0][0], 'b000o24l3q', 'incorrect biggestRecordAmazon')
Test.assertEquals(len(biggestRecordAmazon[0][1]), 1547, 'incorrect len for biggestRecordAmazon')
```
### **Part 2: ER as Text Similarity - Weighted Bag-of-Words using TF-IDF**
#### Bag-of-words comparisons are not very good when all tokens are treated the same: some tokens are more important than others. Weights give us a way to specify which tokens to favor. With weights, when we compare documents, instead of counting common tokens, we sum up the weights of common tokens. A good heuristic for assigning weights is called "Term-Frequency/Inverse-Document-Frequency," or [TF-IDF][tfidf] for short.
#### **TF**
#### TF rewards tokens that appear many times in the same document. It is computed as the frequency of a token in a document, that is, if document *d* contains 100 tokens and token *t* appears in *d* 5 times, then the TF weight of *t* in *d* is *5/100 = 1/20*. The intuition for TF is that if a word occurs often in a document, then it is more important to the meaning of the document.
#### **IDF**
#### IDF rewards tokens that are rare overall in a dataset. The intuition is that it is more significant if two documents share a rare word than a common one. IDF weight for a token, *t*, in a set of documents, *U*, is computed as follows:
* #### Let *N* be the total number of documents in *U*
* #### Find *n(t)*, the number of documents in *U* that contain *t*
* #### Then *IDF(t) = N/n(t)*.
#### Note that *n(t)/N* is the frequency of *t* in *U*, and *N/n(t)* is the inverse frequency.
> #### **Note on terminology**: Sometimes token weights depend on the document the token belongs to, that is, the same token may have a different weight when it's found in different documents. We call these weights *local* weights. TF is an example of a local weight, because it depends on the length of the source. On the other hand, some token weights only depend on the token, and are the same everywhere that token is found. We call these weights *global*, and IDF is one such weight.
#### **TF-IDF**
#### Finally, to bring it all together, the total TF-IDF weight for a token in a document is the product of its TF and IDF weights.
[tfidf]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
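#### As a quick sanity check on the definitions above (plain Python, independent of the Spark exercises; the three tiny documents are made up), TF and IDF combine as follows:
```
corpus = [['quick', 'fox'], ['lazy', 'dog'], ['quick', 'dog', 'dog']]
N = float(len(corpus))                              # N = 3 documents
n_t = {}
for doc in corpus:
    for t in set(doc):                              # count documents, not occurrences
        n_t[t] = n_t.get(t, 0) + 1
idf = dict((t, N / c) for t, c in n_t.items())      # idf['dog'] = 3/2 = 1.5
doc = corpus[2]
tf = dict((t, doc.count(t) / float(len(doc))) for t in set(doc))
tfidf_doc = dict((t, tf[t] * idf[t]) for t in tf)   # {'dog': 1.0, 'quick': 0.5}
print(tfidf_doc)
```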
### **(2a) Implement a TF function**
#### Implement `tf(tokens)` that takes a list of tokens and returns a Python [dictionary](https://docs.python.org/2/tutorial/datastructures.html#dictionaries) mapping tokens to TF weights.
#### The steps your function should perform are:
* #### Create an empty Python dictionary
* #### For each of the tokens in the input `tokens` list, count 1 for each occurrence and add the token to the dictionary
* #### For each of the tokens in the dictionary, divide the token's count by the total number of tokens in the input `tokens` list
```
# TODO: Replace <FILL IN> with appropriate code
from collections import Counter
def tf(tokens):
""" Compute TF
Args:
tokens (list of str): input list of tokens from tokenize
Returns:
dictionary: a dictionary of tokens to its TF values
"""
count = len(tokens)
word_freq = Counter(tokens)
return {key: float(value)/count for key, value in word_freq.items()}
print tf(tokenize(quickbrownfox)) # Should give { 'quick': 0.1666 ... }
# TEST Implement a TF function (2a)
tf_test = tf(tokenize(quickbrownfox))
Test.assertEquals(tf_test, {'brown': 0.16666666666666666, 'lazy': 0.16666666666666666,
'jumps': 0.16666666666666666, 'fox': 0.16666666666666666,
'dog': 0.16666666666666666, 'quick': 0.16666666666666666},
'incorrect result for tf on sample text')
tf_test2 = tf(tokenize('one_ one_ two!'))
Test.assertEquals(tf_test2, {'one_': 0.6666666666666666, 'two': 0.3333333333333333},
'incorrect result for tf test')
```
### **(2b) Create a corpus**
#### Create a pair RDD called `corpusRDD`, consisting of a combination of the two small datasets, `amazonRecToToken` and `googleRecToToken`. Each element of the `corpusRDD` should be a pair consisting of a key from one of the small datasets (ID or URL) and its associated tokenized value.
```
# TODO: Replace <FILL IN> with appropriate code
corpusRDD = amazonRecToToken.union(googleRecToToken)
# TEST Create a corpus (2b)
Test.assertEquals(corpusRDD.count(), 400, 'incorrect corpusRDD.count()')
```
### **(2c) Implement an IDFs function**
#### Implement `idfs` that assigns an IDF weight to every unique token in an RDD called `corpus`. The function should return a pair RDD where the `key` is the unique token and the value is the IDF weight for the token.
#### Recall that the IDF weight for a token, *t*, in a set of documents, *U*, is computed as follows:
* #### Let *N* be the total number of documents in *U*.
* #### Find *n(t)*, the number of documents in *U* that contain *t*.
* #### Then *IDF(t) = N/n(t)*.
#### The steps your function should perform are:
* #### Calculate *N*. Think about how you can calculate *N* from the input RDD.
* #### Create an RDD (*not a pair RDD*) containing the unique tokens from each document in the input `corpus`. For each document, you should only include a token once, *even if it appears multiple times in that document.*
* #### For each of the unique tokens, count the number of documents it appears in; this is *n(t)*. Then compute the IDF for that token: *N/n(t)*
#### Use your `idfs` to compute the IDF weights for all tokens in `corpusRDD` (the combined small datasets).
#### How many unique tokens are there?
```
# TODO: Replace <FILL IN> with appropriate code
def idfs(corpus):
""" Compute IDF
Args:
corpus (RDD): input corpus
Returns:
RDD: a RDD of (token, IDF value)
"""
N = corpus.count()
uniqueTokens = corpus.flatMap(lambda x: list(set(x[1])))
tokenCountPairTuple = uniqueTokens.map(lambda x: (x, 1))
tokenSumPairTuple = tokenCountPairTuple.reduceByKey(lambda a, b: a + b)
return tokenSumPairTuple.map(lambda x: (x[0], float(N)/x[1]))
idfsSmall = idfs(amazonRecToToken.union(googleRecToToken))
uniqueTokenCount = idfsSmall.count()
print 'There are %s unique tokens in the small datasets.' % uniqueTokenCount
# TEST Implement an IDFs function (2c)
Test.assertEquals(uniqueTokenCount, 4772, 'incorrect uniqueTokenCount')
tokenSmallestIdf = idfsSmall.takeOrdered(1, lambda s: s[1])[0]
Test.assertEquals(tokenSmallestIdf[0], 'software', 'incorrect smallest IDF token')
Test.assertTrue(abs(tokenSmallestIdf[1] - 4.25531914894) < 0.0000000001,
'incorrect smallest IDF value')
```
### **(2d) Tokens with the smallest IDF**
#### Print out the 11 tokens with the smallest IDF in the combined small dataset.
```
smallIDFTokens = idfsSmall.takeOrdered(11, lambda s: s[1])
print smallIDFTokens
```
### **(2e) IDF Histogram**
#### Plot a histogram of IDF values. Be sure to use appropriate scaling and bucketing for the data.
#### First plot the histogram using `matplotlib`
```
import matplotlib.pyplot as plt
small_idf_values = idfsSmall.map(lambda s: s[1]).collect()
fig = plt.figure(figsize=(8,3))
plt.hist(small_idf_values, 50, log=True)
pass
```
### **(2f) Implement a TF-IDF function**
#### Use your `tf` function to implement a `tfidf(tokens, idfs)` function that takes a list of tokens from a document and a Python dictionary of IDF weights and returns a Python dictionary mapping individual tokens to total TF-IDF weights.
#### The steps your function should perform are:
* #### Calculate the token frequencies (TF) for `tokens`
* #### Create a Python dictionary where each token maps to the token's frequency times the token's IDF weight
#### Use your `tfidf` function to compute the weights of Amazon product record 'b000hkgj8k'. To do this, we need to extract the record from the tokenized small Amazon dataset and convert the IDFs for the small dataset into a Python dictionary. We can do the first part by using a `filter()` transformation to extract the matching record and a `collect()` action to return the value to the driver. For the second part, we use the [`collectAsMap()` action](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.collectAsMap) to return the IDFs to the driver as a Python dictionary.
```
# TODO: Replace <FILL IN> with appropriate code
def tfidf(tokens, idfs):
""" Compute TF-IDF
Args:
tokens (list of str): input list of tokens from tokenize
idfs (dictionary): record to IDF value
Returns:
dictionary: a dictionary of records to TF-IDF values
"""
tfs = tf(tokens)
tfIdfDict = dict((k, tfs[k] * idfs[k]) for k in tokens if k in idfs)
return tfIdfDict
recb000hkgj8k = amazonRecToToken.filter(lambda x: x[0] == 'b000hkgj8k').collect()[0][1]
idfsSmallWeights = idfsSmall.collectAsMap()
rec_b000hkgj8k_weights = tfidf(recb000hkgj8k, idfsSmallWeights)
print 'Amazon record "b000hkgj8k" has tokens and weights:\n%s' % rec_b000hkgj8k_weights
# TEST Implement a TF-IDF function (2f)
Test.assertEquals(rec_b000hkgj8k_weights,
{'autocad': 33.33333333333333, 'autodesk': 8.333333333333332,
'courseware': 66.66666666666666, 'psg': 33.33333333333333,
'2007': 3.5087719298245617, 'customizing': 16.666666666666664,
'interface': 3.0303030303030303}, 'incorrect rec_b000hkgj8k_weights')
```
### **Part 3: ER as Text Similarity - Cosine Similarity**
#### Now we are ready to do text comparisons in a formal way. The metric of string distance we will use is called **[cosine similarity][cosine]**. We will treat each document as a vector in some high dimensional space. Then, to compare two documents we compute the cosine of the angle between their two document vectors. This is *much* easier than it sounds.
#### The first question to answer is how do we represent documents as vectors? The answer is familiar: bag-of-words! We treat each unique token as a dimension, and treat token weights as magnitudes in their respective token dimensions. For example, suppose we use simple counts as weights, and we want to interpret the string "Hello, world! Goodbye, world!" as a vector. Then in the "hello" and "goodbye" dimensions the vector has value 1, in the "world" dimension it has value 2, and it is zero in all other dimensions.
#### The next question is: given two vectors how do we find the cosine of the angle between them? Recall the formula for the dot product of two vectors:
#### $$ a \cdot b = \| a \| \| b \| \cos \theta $$
#### Here $ a \cdot b = \sum a_i b_i $ is the ordinary dot product of two vectors, and $ \|a\| = \sqrt{ \sum a_i^2 } $ is the norm of $ a $.
#### We can rearrange terms and solve for the cosine to find it is simply the normalized dot product of the vectors. With our vector model, the dot product and norm computations are simple functions of the bag-of-words document representations, so we now have a formal way to compute similarity:
#### $$ similarity = \cos \theta = \frac{a \cdot b}{\|a\| \|b\|} = \frac{\sum a_i b_i}{\sqrt{\sum a_i^2} \sqrt{\sum b_i^2}} $$
#### Setting aside the algebra, the geometric interpretation is more intuitive. The angle between two document vectors is small if they share many tokens in common, because they are pointing in roughly the same direction. For that case, the cosine of the angle will be large. Otherwise, if the angle is large (and they have few words in common), the cosine is small. Therefore, cosine similarity scales proportionally with our intuitive sense of similarity.
[cosine]: https://en.wikipedia.org/wiki/Cosine_similarity
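#### A small worked example (plain Python, illustrative only): using simple counts as weights, "Hello, world! Goodbye, world!" and "Hello, world!" become the dictionaries below, and the cosine similarity follows directly from the formula above.
```
import math

a = {'hello': 1, 'goodbye': 1, 'world': 2}    # "Hello, world! Goodbye, world!"
b = {'hello': 1, 'world': 1}                  # "Hello, world!"
dot = sum(a[t] * b[t] for t in a if t in b)   # 1*1 + 2*1 = 3
norm_a = math.sqrt(sum(v * v for v in a.values()))   # sqrt(6)
norm_b = math.sqrt(sum(v * v for v in b.values()))   # sqrt(2)
print(dot / (norm_a * norm_b))                # 3 / sqrt(12), roughly 0.87
```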
### **(3a) Implement the components of a `cosineSimilarity` function**
#### Implement the components of a `cosineSimilarity` function.
#### Use the `tokenize` and `tfidf` functions, and the IDF weights from Part 2 for extracting tokens and assigning them weights.
#### The steps you should perform are:
* #### Define a function `dotprod` that takes two Python dictionaries and produces the dot product of them, where the dot product is defined as the sum of the product of values for tokens that appear in *both* dictionaries
* #### Define a function `norm` that returns the square root of the dot product of a dictionary and itself
* #### Define a function `cossim` that returns the dot product of two dictionaries divided by the norm of the first dictionary and then by the norm of the second dictionary
```
# TODO: Replace <FILL IN> with appropriate code
import math
def dotprod(a, b):
""" Compute dot product
Args:
a (dictionary): first dictionary of record to value
b (dictionary): second dictionary of record to value
Returns:
dotProd: result of the dot product with the two input dictionaries
"""
return sum(a[k] * b[k] for k in a.keys() if k in b.keys())
def norm(a):
""" Compute square root of the dot product
Args:
a (dictionary): a dictionary of record to value
Returns:
        norm: the square root of the dot product of a with itself
"""
return math.sqrt(dotprod(a,a))
def cossim(a, b):
""" Compute cosine similarity
Args:
a (dictionary): first dictionary of record to value
b (dictionary): second dictionary of record to value
Returns:
cossim: dot product of two dictionaries divided by the norm of the first dictionary and
then by the norm of the second dictionary
"""
return dotprod(a,b)/(norm(a) * norm(b))
testVec1 = {'foo': 2, 'bar': 3, 'baz': 5 }
testVec2 = {'foo': 1, 'bar': 0, 'baz': 20 }
dp = dotprod(testVec1, testVec2)
nm = norm(testVec1)
print dp, nm
# TEST Implement the components of a cosineSimilarity function (3a)
Test.assertEquals(dp, 102, 'incorrect dp')
Test.assertTrue(abs(nm - 6.16441400297) < 0.0000001, 'incorrrect nm')
```
### **(3b) Implement a `cosineSimilarity` function**
#### Implement a `cosineSimilarity(string1, string2, idfsDictionary)` function that takes two strings and a dictionary of IDF weights, and computes their cosine similarity in the context of some global IDF weights.
#### The steps you should perform are:
* #### Apply your `tfidf` function to the tokenized first and second strings, using the dictionary of IDF weights
* #### Compute and return your `cossim` function applied to the results of the two `tfidf` functions
```
# TODO: Replace <FILL IN> with appropriate code
def cosineSimilarity(string1, string2, idfsDictionary):
""" Compute cosine similarity between two strings
Args:
string1 (str): first string
string2 (str): second string
idfsDictionary (dictionary): a dictionary of IDF values
Returns:
cossim: cosine similarity value
"""
w1 = tfidf(tokenize(string1), idfsDictionary)
w2 = tfidf(tokenize(string2), idfsDictionary)
return cossim(w1, w2)
cossimAdobe = cosineSimilarity('Adobe Photoshop',
'Adobe Illustrator',
idfsSmallWeights)
print cossimAdobe
# TEST Implement a cosineSimilarity function (3b)
Test.assertTrue(abs(cossimAdobe - 0.0577243382163) < 0.0000001, 'incorrect cossimAdobe')
```
### **(3c) Perform Entity Resolution**
#### Now we can finally do some entity resolution!
#### For *every* product record in the small Google dataset, use your `cosineSimilarity` function to compute its similarity to every record in the small Amazon dataset. Then, build a dictionary mapping `(Google URL, Amazon ID)` tuples to similarity scores between 0 and 1.
#### We'll do this computation two different ways, first we'll do it without a broadcast variable, and then we'll use a broadcast variable
#### The steps you should perform are:
* #### Create an RDD that is a combination of the small Google and small Amazon datasets that has as elements all pairs of elements (a, b) where a is in self and b is in other. The result will be an RDD of the form: `[ ((Google URL1, Google String1), (Amazon ID1, Amazon String1)), ((Google URL1, Google String1), (Amazon ID2, Amazon String2)), ((Google URL2, Google String2), (Amazon ID1, Amazon String1)), ... ]`
* #### Define a worker function that, given an element from the combination RDD, computes the cosine similarity for the two records in the element
* #### Apply the worker function to every element in the RDD
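#### Before writing the Spark code, note that `cartesian()` behaves like `itertools.product` in plain Python (the tiny lists below are made up): every Google record is paired with every Amazon record, which is why this brute-force approach is only practical for the 200-record samples.
```
import itertools

google_sample = [('url1', 'adobe photoshop cs3'), ('url2', 'hp deskjet printer')]
amazon_sample = [('id1', 'photoshop cs3 upgrade'), ('id2', 'canon powershot camera')]
pairs = list(itertools.product(google_sample, amazon_sample))
print(len(pairs))   # 2 * 2 = 4 candidate pairs; the count grows quadratically
```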
#### Now, compute the similarity between Amazon record `b000o24l3q` and Google record `http://www.google.com/base/feeds/snippets/17242822440574356561`.
```
# TODO: Replace <FILL IN> with appropriate code
crossSmall = (googleSmall
.cartesian(amazonSmall)
.cache())
def computeSimilarity(record):
""" Compute similarity on a combination record
Args:
record: a pair, (google record, amazon record)
Returns:
pair: a pair, (google URL, amazon ID, cosine similarity value)
"""
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallWeights)
return (googleURL, amazonID, cs)
similarities = (crossSmall
.map(computeSimilarity)
.cache())
def similar(amazonID, googleURL):
""" Return similarity value
Args:
amazonID: amazon ID
googleURL: google URL
Returns:
similar: cosine similarity value
"""
return (similarities
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogle = similar('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogle
# TEST Perform Entity Resolution (3c)
Test.assertTrue(abs(similarityAmazonGoogle - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
```
### **(3d) Perform Entity Resolution with Broadcast Variables**
#### The solution in (3c) works well for small datasets, but it requires Spark to (automatically) send the `idfsSmallWeights` variable to all the workers. If we didn't `cache()` similarities, then it might have to be recreated if we run `similar()` multiple times. This would cause Spark to send `idfsSmallWeights` every time.
#### Instead, we can use a broadcast variable - we define the broadcast variable in the driver and then we can refer to it in each worker. Spark saves the broadcast variable at each worker, so it is only sent once.
#### The steps you should perform are:
* #### Define a `computeSimilarityBroadcast` function that, given an element from the combination RDD, computes the cosine similarity for the two records in the element. This will be the same as the worker function `computeSimilarity` in (3c) except that it uses a broadcast variable.
* #### Apply the worker function to every element in the RDD
#### Again, compute the similarity between Amazon record `b000o24l3q` and Google record `http://www.google.com/base/feeds/snippets/17242822440574356561`.
```
# TODO: Replace <FILL IN> with appropriate code
def computeSimilarityBroadcast(record):
""" Compute similarity on a combination record, using Broadcast variable
Args:
record: a pair, (google record, amazon record)
Returns:
pair: a pair, (google URL, amazon ID, cosine similarity value)
"""
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallBroadcast.value)
return (googleURL, amazonID, cs)
idfsSmallBroadcast = sc.broadcast(idfsSmallWeights)
similaritiesBroadcast = (crossSmall
                         .map(computeSimilarityBroadcast)
                         .cache())
def similarBroadcast(amazonID, googleURL):
""" Return similarity value, computed using Broadcast variable
Args:
amazonID: amazon ID
googleURL: google URL
Returns:
similar: cosine similarity value
"""
return (similaritiesBroadcast
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogleBroadcast = similarBroadcast('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogleBroadcast
# TEST Perform Entity Resolution with Broadcast Variables (3d)
from pyspark import Broadcast
Test.assertTrue(isinstance(idfsSmallBroadcast, Broadcast), 'incorrect idfsSmallBroadcast')
Test.assertEquals(len(idfsSmallBroadcast.value), 4772, 'incorrect idfsSmallBroadcast value')
Test.assertTrue(abs(similarityAmazonGoogleBroadcast - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
```
### **(3e) Perform a Gold Standard evaluation**
#### First, we'll load the "gold standard" data and use it to answer several questions. We read and parse the Gold Standard data, where the format of each line is "Amazon Product ID","Google URL". The resulting RDD has elements of the form ("AmazonID GoogleURL", 'gold')
```
GOLDFILE_PATTERN = '^(.+),(.+)'
# Parse each line of a data file using the specified regular expression pattern
def parse_goldfile_line(goldfile_line):
""" Parse a line from the 'golden standard' data file
Args:
goldfile_line: a line of data
Returns:
        pair: ((key, 'gold'), 1) if successful, or (line, 0 or -1) otherwise
"""
match = re.search(GOLDFILE_PATTERN, goldfile_line)
if match is None:
print 'Invalid goldfile line: %s' % goldfile_line
return (goldfile_line, -1)
elif match.group(1) == '"idAmazon"':
print 'Header datafile line: %s' % goldfile_line
return (goldfile_line, 0)
else:
key = '%s %s' % (removeQuotes(match.group(1)), removeQuotes(match.group(2)))
return ((key, 'gold'), 1)
goldfile = os.path.join(baseDir, inputPath, GOLD_STANDARD_PATH)
gsRaw = (sc
.textFile(goldfile)
.map(parse_goldfile_line)
.cache())
gsFailed = (gsRaw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in gsFailed.take(10):
print 'Invalid goldfile line: %s' % line
goldStandard = (gsRaw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (gsRaw.count(),
goldStandard.count(),
gsFailed.count())
assert (gsFailed.count() == 0)
assert (gsRaw.count() == (goldStandard.count() + 1))
```
### Using the "gold standard" data we can answer the following questions:
* #### How many true duplicate pairs are there in the small datasets?
* #### What is the average similarity score for true duplicates?
* #### What about for non-duplicates?
#### The steps you should perform are:
* #### Create a new `sims` RDD from the `similaritiesBroadcast` RDD, where each element consists of a pair of the form ("AmazonID GoogleURL", cosineSimilarityScore). An example entry from `sims` is: ('b000bi7uqs http://www.google.com/base/feeds/snippets/18403148885652932189', 0.40202896125621296)
* #### Combine the `sims` RDD with the `goldStandard` RDD by creating a new `trueDupsRDD` RDD that has just the cosine similarity scores for those "AmazonID GoogleURL" pairs that appear in both the `sims` RDD and `goldStandard` RDD. Hint: you can do this using the join() transformation.
* #### Count the number of true duplicate pairs in the `trueDupsRDD` dataset
* #### Compute the average similarity score for true duplicates in the `trueDupsRDD` datasets. Remember to use `float` for calculation
* #### Create a new `nonDupsRDD` RDD that has just the cosine similarity scores for those "AmazonID GoogleURL" pairs from the `similaritiesBroadcast` RDD that **do not** appear in both the *sims* RDD and gold standard RDD.
* #### Compute the average similarity score for non-duplicates in the last datasets. Remember to use `float` for calculation
```
# TODO: Replace <FILL IN> with appropriate code
sims = similaritiesBroadcast.map(lambda x: (x[1] + " " + x[0], x[2]))
trueDupsRDD = (sims
.join(goldStandard).map(lambda x: (x[0], x[1][0])))
trueDupsCount = trueDupsRDD.count()
avgSimDups = trueDupsRDD.map(lambda x: x[1]).sum()/float(trueDupsCount)
nonDupsRDD = (sims
.leftOuterJoin(goldStandard).filter(lambda x: x[1][1] == None).map(lambda x: (x[0], x[1][0])))
avgSimNon = nonDupsRDD.map(lambda x: x[1]).sum()/float(nonDupsRDD.count())
print 'There are %s true duplicates.' % trueDupsCount
print 'The average similarity of true duplicates is %s.' % avgSimDups
print 'And for non duplicates, it is %s.' % avgSimNon
# TEST Perform a Gold Standard evaluation (3e)
Test.assertEquals(trueDupsCount, 146, 'incorrect trueDupsCount')
Test.assertTrue(abs(avgSimDups - 0.264332573435) < 0.0000001, 'incorrect avgSimDups')
Test.assertTrue(abs(avgSimNon - 0.00123476304656) < 0.0000001, 'incorrect avgSimNon')
```
### **Part 4: Scalable ER**
#### In the previous parts, we built a text similarity function and used it for small scale entity resolution. Our implementation is limited by its quadratic run time complexity, and is not practical for even modestly sized datasets. In this part, we will implement a more scalable algorithm and use it to do entity resolution on the full dataset.
### Inverted Indices
#### To improve our ER algorithm from the earlier parts, we should begin by analyzing its running time. In particular, the algorithm above is quadratic in two ways. First, we did a lot of redundant computation of tokens and weights, since each record was reprocessed every time it was compared. Second, we made quadratically many token comparisons between records.
#### The first source of quadratic overhead can be eliminated with precomputation and look-up tables, but the second source is a little more tricky. In the worst case, every token in every record in one dataset exists in every record in the other dataset, and therefore every token makes a non-zero contribution to the cosine similarity. In this case, token comparison is unavoidably quadratic.
#### But in reality most records have nothing (or very little) in common. Moreover, it is typical for a record in one dataset to have at most one duplicate record in the other dataset (this is the case assuming each dataset has been de-duplicated against itself). In this case, the output is linear in the size of the input and we can hope to achieve linear running time.
#### An [**inverted index**](https://en.wikipedia.org/wiki/Inverted_index) is a data structure that will allow us to avoid making quadratically many token comparisons. It maps each token in the dataset to the list of documents that contain the token. So, instead of comparing, record by record, each token to every other token to see if they match, we will use inverted indices to *look up* records that match on a particular token.
> #### **Note on terminology**: In text search, a *forward* index maps documents in a dataset to the tokens they contain. An *inverted* index supports the inverse mapping.
> #### **Note**: For this section, use the complete Google and Amazon datasets, not the samples
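#### As a minimal sketch of the idea in plain Python (the document IDs and tokens below are made up), building the token-to-document map once lets us look up candidate matches instead of enumerating every pair:
```
docs = {'a1': ['canon', 'camera'], 'a2': ['hp', 'printer'],
        'g1': ['canon', 'camera', 'bag'], 'g2': ['office', 'software']}

inverted = {}
for doc_id, tokens in docs.items():
    for token in set(tokens):
        inverted.setdefault(token, []).append(doc_id)

# Only documents that share at least one token are ever compared.
print(sorted(inverted['canon']))   # ['a1', 'g1']
```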
### **(4a) Tokenize the full dataset**
#### Tokenize each of the two full datasets for Google and Amazon.
```
# TODO: Replace <FILL IN> with appropriate code
amazonFullRecToToken = amazon.map(lambda x: (x[0], tokenize(x[1])))
googleFullRecToToken = google.map(lambda x: (x[0], tokenize(x[1])))
print 'Amazon full dataset is %s products, Google full dataset is %s products' % (amazonFullRecToToken.count(),
googleFullRecToToken.count())
# TEST Tokenize the full dataset (4a)
Test.assertEquals(amazonFullRecToToken.count(), 1363, 'incorrect amazonFullRecToToken.count()')
Test.assertEquals(googleFullRecToToken.count(), 3226, 'incorrect googleFullRecToToken.count()')
```
### **(4b) Compute IDFs and TF-IDFs for the full datasets**
#### We will reuse your code from above to compute IDF weights for the complete combined datasets.
#### The steps you should perform are:
* #### Create a new `fullCorpusRDD` that contains the tokens from the full Amazon and Google datasets.
* #### Apply your `idfs` function to the `fullCorpusRDD`
* #### Create a broadcast variable containing a dictionary of the IDF weights for the full dataset.
* #### For each of the Amazon and Google full datasets, create weight RDDs that map IDs/URLs to TF-IDF weighted token vectors.
```
# TODO: Replace <FILL IN> with appropriate code
fullCorpusRDD = amazonFullRecToToken.union(googleFullRecToToken)
idfsFull = idfs(fullCorpusRDD)
idfsFullCount = idfsFull.count()
print 'There are %s unique tokens in the full datasets.' % idfsFullCount
# Recompute IDFs for full dataset
idfsFullWeights = idfsFull.collectAsMap()
idfsFullBroadcast = sc.broadcast(idfsFullWeights)
# Pre-compute TF-IDF weights. Build mappings from record ID to weight vector.
amazonWeightsRDD = amazonFullRecToToken.map(lambda x: (x[0], tfidf(x[1],idfsFullBroadcast.value)))
googleWeightsRDD = googleFullRecToToken.map(lambda x: (x[0], tfidf(x[1],idfsFullBroadcast.value)))
print 'There are %s Amazon weights and %s Google weights.' % (amazonWeightsRDD.count(),
googleWeightsRDD.count())
# TEST Compute IDFs and TF-IDFs for the full datasets (4b)
Test.assertEquals(idfsFullCount, 17078, 'incorrect idfsFullCount')
Test.assertEquals(amazonWeightsRDD.count(), 1363, 'incorrect amazonWeightsRDD.count()')
Test.assertEquals(googleWeightsRDD.count(), 3226, 'incorrect googleWeightsRDD.count()')
```
### **(4c) Compute Norms for the weights from the full datasets**
#### We will reuse your code from above to compute norms of the IDF weights for the complete combined dataset.
#### The steps you should perform are:
* #### Create two collections, one for each of the full Amazon and Google datasets, where IDs/URLs map to the norm of the associated TF-IDF weighted token vectors.
* #### Convert each collection into a broadcast variable, containing a dictionary of the norm of IDF weights for the full dataset
```
# TODO: Replace <FILL IN> with appropriate code
amazonNorms = amazonWeightsRDD.map(lambda x: (x[0], norm(x[1])))
amazonNormsBroadcast = sc.broadcast(amazonNorms.collectAsMap())
googleNorms = googleWeightsRDD.map(lambda x: (x[0], norm(x[1])))
googleNormsBroadcast = sc.broadcast(googleNorms.collectAsMap())
# TEST Compute Norms for the weights from the full datasets (4c)
Test.assertTrue(isinstance(amazonNormsBroadcast, Broadcast), 'incorrect amazonNormsBroadcast')
Test.assertEquals(len(amazonNormsBroadcast.value), 1363, 'incorrect amazonNormsBroadcast.value')
Test.assertTrue(isinstance(googleNormsBroadcast, Broadcast), 'incorrect googleNormsBroadcast')
Test.assertEquals(len(googleNormsBroadcast.value), 3226, 'incorrect googleNormsBroadcast.value')
```
### **(4d) Create inverted indices from the full datasets**
#### Build inverted indices of both data sources.
#### The steps you should perform are:
* #### Create an invert function that given a pair of (ID/URL, TF-IDF weighted token vector), returns a list of pairs of (token, ID/URL). Recall that the TF-IDF weighted token vector is a Python dictionary with keys that are tokens and values that are weights.
* #### Use your invert function to convert the full Amazon and Google TF-IDF weighted token vector datasets into two RDDs where each element is a pair of a token and an ID/URL that contains that token. These are inverted indices.
```
# TODO: Replace <FILL IN> with appropriate code
def invert(record):
""" Invert (ID, tokens) to a list of (token, ID)
Args:
record: a pair, (ID, token vector)
Returns:
pairs: a list of pairs of token to ID
"""
pairs = [(token, record[0]) for token in record[1]]
return pairs
amazonInvPairsRDD = (amazonWeightsRDD
.flatMap(invert)
.cache())
googleInvPairsRDD = (googleWeightsRDD
.flatMap(invert)
.cache())
print 'There are %s Amazon inverted pairs and %s Google inverted pairs.' % (amazonInvPairsRDD.count(),
googleInvPairsRDD.count())
# TEST Create inverted indices from the full datasets (4d)
invertedPair = invert((1, {'foo': 2}))
Test.assertEquals(invertedPair[0][1], 1, 'incorrect invert result')
Test.assertEquals(amazonInvPairsRDD.count(), 111387, 'incorrect amazonInvPairsRDD.count()')
Test.assertEquals(googleInvPairsRDD.count(), 77678, 'incorrect googleInvPairsRDD.count()')
```
### **(4e) Identify common tokens from the full dataset**
#### We are now in position to efficiently perform ER on the full datasets. Implement the following algorithm to build an RDD that maps a pair of (ID, URL) to a list of tokens they share in common:
* #### Using the two inverted indicies (RDDs where each element is a pair of a token and an ID or URL that contains that token), create a new RDD that contains only tokens that appear in both datasets. This will yield an RDD of pairs of (token, iterable(ID, URL)).
* #### We need a mapping from (ID, URL) to token, so create a function that will swap the elements of the RDD you just created to create this new RDD consisting of ((ID, URL), token) pairs.
* #### Finally, create an RDD consisting of pairs mapping (ID, URL) to all the tokens the pair shares in common
```
# TODO: Replace <FILL IN> with appropriate code
def swap(record):
""" Swap (token, (ID, URL)) to ((ID, URL), token)
Args:
record: a pair, (token, (ID, URL))
Returns:
pair: ((ID, URL), token)
"""
token = record[0]
keys = record[1]
return (keys, token)
commonTokens = (amazonInvPairsRDD
.join(googleInvPairsRDD).map(swap).groupByKey()
.cache())
print 'Found %d common tokens' % commonTokens.count()
# TEST Identify common tokens from the full dataset (4e)
Test.assertEquals(commonTokens.count(), 2441100, 'incorrect commonTokens.count()')
```
### **(4f) Identify common tokens from the full dataset**
#### Use the data structures from parts **(4a)** and **(4e)** to build a dictionary to map record pairs to cosine similarity scores.
#### The steps you should perform are:
* #### Create two broadcast dictionaries from the amazonWeights and googleWeights RDDs
* #### Create a `fastCosinesSimilarity` function that takes in a record consisting of the pair ((Amazon ID, Google URL), tokens list) and computes the sum for each of the tokens in the token list of the products of the Amazon weight for the token times the Google weight for the token. The sum should then be divided by the norm for the Google URL and then divided by the norm for the Amazon ID. The function should return this value in a pair with the key being the (Amazon ID, Google URL). *Make sure you use broadcast variables you created for both the weights and norms*
* #### Apply your `fastCosinesSimilarity` function to the common tokens from the full dataset
```
# TODO: Replace <FILL IN> with appropriate code
amazonWeightsBroadcast = sc.broadcast(amazonWeightsRDD.collectAsMap())
googleWeightsBroadcast = sc.broadcast(googleWeightsRDD.collectAsMap())
def fastCosineSimilarity(record):
""" Compute Cosine Similarity using Broadcast variables
Args:
record: ((ID, URL), token)
Returns:
pair: ((ID, URL), cosine similarity value)
"""
amazonRec = record[0][0]
googleRec = record[0][1]
tokens = record[1]
s = sum(amazonWeightsBroadcast.value[amazonRec][i] * googleWeightsBroadcast.value[googleRec][i] for i in tokens)
value = s/(amazonNormsBroadcast.value[amazonRec] * googleNormsBroadcast.value[googleRec])
key = (amazonRec, googleRec)
return (key, value)
similaritiesFullRDD = (commonTokens
.map(fastCosineSimilarity)
.cache())
print similaritiesFullRDD.count()
# TEST Identify common tokens from the full dataset (4f)
similarityTest = similaritiesFullRDD.filter(lambda ((aID, gURL), cs): aID == 'b00005lzly' and gURL == 'http://www.google.com/base/feeds/snippets/13823221823254120257').collect()
Test.assertEquals(len(similarityTest), 1, 'incorrect len(similarityTest)')
Test.assertTrue(abs(similarityTest[0][1] - 4.286548414e-06) < 0.000000000001, 'incorrect similarityTest fastCosineSimilarity')
Test.assertEquals(similaritiesFullRDD.count(), 2441100, 'incorrect similaritiesFullRDD.count()')
```
### **Part 5: Analysis**
#### Now we have an authoritative list of record-pair similarities, but we need a way to use those similarities to decide if two records are duplicates or not. The simplest approach is to pick a **threshold**. Pairs whose similarity is above the threshold are declared duplicates, and pairs below the threshold are declared distinct.
#### To decide where to set the threshold we need to understand what kind of errors result at different levels. If we set the threshold too low, we get more **false positives**, that is, record-pairs we say are duplicates that in reality are not. If we set the threshold too high, we get more **false negatives**, that is, record-pairs that really are duplicates but that we miss.
#### ER algorithms are evaluated by the common metrics of information retrieval and search called **precision** and **recall**. Precision asks of all the record-pairs marked duplicates, what fraction are true duplicates? Recall asks of all the true duplicates in the data, what fraction did we successfully find? As with false positives and false negatives, there is a trade-off between precision and recall. A third metric, called **F-measure**, takes the harmonic mean of precision and recall to measure overall goodness in a single value:
#### $$ Fmeasure = 2 \frac{precision * recall}{precision + recall} $$
> #### **Note**: In this part, we use the "gold standard" mapping from the included file to look up true duplicates, and the results of Part 4.
> #### **Note**: In this part, you will not be writing any code. We've written all of the code for you. Run each cell and then answer the quiz questions on Studio.
### **(5a) Counting True Positives, False Positives, and False Negatives**
#### We need functions that count True Positives (true duplicates above the threshold), and False Positives and False Negatives:
* #### We start with creating the `simsFullRDD` from our `similaritiesFullRDD` that consists of a pair of ((Amazon ID, Google URL), simlarity score)
* #### From this RDD, we create an RDD consisting of only the similarity scores
* #### To look up the similarity scores for true duplicates, we perform a left outer join using the `goldStandard` RDD and `simsFullRDD` and extract the
```
# Create an RDD of ((Amazon ID, Google URL), similarity score)
simsFullRDD = similaritiesFullRDD.map(lambda x: ("%s %s" % (x[0][0], x[0][1]), x[1]))
assert (simsFullRDD.count() == 2441100)
# Create an RDD of just the similarity scores
simsFullValuesRDD = (simsFullRDD
.map(lambda x: x[1])
.cache())
assert (simsFullValuesRDD.count() == 2441100)
# Look up all similarity scores for true duplicates
# This helper function will return the similarity score for records that are in the gold standard and the simsFullRDD (True positives), and will return 0 for records that are in the gold standard but not in simsFullRDD (False Negatives).
def gs_value(record):
if (record[1][1] is None):
return 0
else:
return record[1][1]
# Join the gold standard and simsFullRDD, and then extract the similarities scores using the helper function
trueDupSimsRDD = (goldStandard
.leftOuterJoin(simsFullRDD)
.map(gs_value)
.cache())
print 'There are %s true duplicates.' % trueDupSimsRDD.count()
assert(trueDupSimsRDD.count() == 1300)
```
#### The next step is to pick a threshold between 0 and 1 for the count of True Positives (true duplicates above the threshold). However, we would like to explore many different thresholds. To do this, we divide the space of thresholds into 100 bins, and take the following actions:
* #### We use Spark Accumulators to implement our counting function. We define a custom accumulator type, `VectorAccumulatorParam`, along with functions to initialize the accumulator's vector to zero, and to add two vectors. Note that we have to use the += operator because you can only add to an accumulator.
* #### We create a helper function to create a list with one entry (bit) set to a value and all others set to 0.
* #### We create 101 bins for the 100 threshold values between 0 and 1.
* #### Now, for each similarity score, we can compute the false positives. We do this by adding each similarity score to the appropriate bin of the vector. Then we remove true positives from the vector by using the gold standard data.
* #### We define functions for computing false positive and negative and true positives, for a given threshold.
```
from pyspark.accumulators import AccumulatorParam
class VectorAccumulatorParam(AccumulatorParam):
# Initialize the VectorAccumulator to 0
def zero(self, value):
return [0] * len(value)
# Add two VectorAccumulator variables
def addInPlace(self, val1, val2):
for i in xrange(len(val1)):
val1[i] += val2[i]
return val1
# Return a list with entry x set to value and all other entries set to 0
def set_bit(x, value, length):
bits = []
for y in xrange(length):
if (x == y):
bits.append(value)
else:
bits.append(0)
return bits
# Pre-bin counts of false positives for different threshold ranges
BINS = 101
nthresholds = 100
def bin(similarity):
return int(similarity * nthresholds)
# fpCounts[i] = number of entries (possible false positives) where bin(similarity) == i
zeros = [0] * BINS
fpCounts = sc.accumulator(zeros, VectorAccumulatorParam())
def add_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, 1, BINS)
simsFullValuesRDD.foreach(add_element)
# Remove true positives from FP counts
def sub_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, -1, BINS)
trueDupSimsRDD.foreach(sub_element)
def falsepos(threshold):
fpList = fpCounts.value
return sum([fpList[b] for b in range(0, BINS) if float(b) / nthresholds >= threshold])
def falseneg(threshold):
return trueDupSimsRDD.filter(lambda x: x < threshold).count()
def truepos(threshold):
return trueDupSimsRDD.count() - falsenegDict[threshold]
```
### **(5b) Precision, Recall, and F-measures**
#### We define functions so that we can compute the [Precision][precision-recall], [Recall][precision-recall], and [F-measure][f-measure] as a function of threshold value:
* #### Precision = true-positives / (true-positives + false-positives)
* #### Recall = true-positives / (true-positives + false-negatives)
* #### F-measure = 2 x Recall x Precision / (Recall + Precision)
[precision-recall]: https://en.wikipedia.org/wiki/Precision_and_recall
[f-measure]: https://en.wikipedia.org/wiki/Precision_and_recall#F-measure
```
# Precision = true-positives / (true-positives + false-positives)
# Recall = true-positives / (true-positives + false-negatives)
# F-measure = 2 x Recall x Precision / (Recall + Precision)
def precision(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falseposDict[threshold])
def recall(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falsenegDict[threshold])
def fmeasure(threshold):
r = recall(threshold)
p = precision(threshold)
return 2 * r * p / (r + p)
```
### **(5c) Line Plots**
#### We can make line plots of precision, recall, and F-measure as a function of threshold value, for thresholds between 0.0 and 1.0. You can change `nthresholds` (above in part **(5a)**) to change the threshold values to plot.
```
thresholds = [float(n) / nthresholds for n in range(0, nthresholds)]
falseposDict = dict([(t, falsepos(t)) for t in thresholds])
falsenegDict = dict([(t, falseneg(t)) for t in thresholds])
trueposDict = dict([(t, truepos(t)) for t in thresholds])
precisions = [precision(t) for t in thresholds]
recalls = [recall(t) for t in thresholds]
fmeasures = [fmeasure(t) for t in thresholds]
print precisions[0], fmeasures[0]
assert (abs(precisions[0] - 0.000532546802671) < 0.0000001)
assert (abs(fmeasures[0] - 0.00106452669505) < 0.0000001)
fig = plt.figure()
plt.plot(thresholds, precisions)
plt.plot(thresholds, recalls)
plt.plot(thresholds, fmeasures)
plt.legend(['Precision', 'Recall', 'F-measure'])
pass
```
### Discussion
#### State-of-the-art tools can get an F-measure of about 60% on this dataset. In this lab exercise, our best F-measure is closer to 40%. Look at some examples of errors (both False Positives and False Negatives) and think about what went wrong.
### There are several ways we might improve our simple classifier, including:
#### * Using additional attributes
#### * Performing better featurization of our textual data (e.g., stemming, n-grams, etc.)
#### * Using different similarity functions
|
github_jupyter
|
# Representación y visualización de datos
El aprendizaje automático trata de ajustar modelos a los datos; por esta razón, empezaremos discutiendo como los datos pueden ser representados para ser accesibles por el ordenador. Además de esto, nos basaremos en los ejemplos de matplotlib de la sección anterior para usarlos para representar datos.
## Datos en scikit-learn
Los datos en scikit-learn, salvo algunas excepciones, suelen estar almacenados en
**arrays de 2 dimensiones**, con forma `[n_samples, n_features]`. Muchos algoritmos aceptan también matrices ``scipy.sparse`` con la misma forma.
- **n_samples:** este es el número de ejemplos. Cada ejemplo es un item a procesar (por ejemplo, clasificar). Un ejemplo puede ser un documento, una imagen, un sonido, un vídeo, un objeto astronómico, una fila de una base de datos o de un fichero CSV, o cualquier cosa que se pueda describir usando un conjunto prefijado de trazas cuantitativas.
- **n_features:** este es el número de características descriptoras que se utilizan para describir cada item de forma cuantitativa. Las características son, generalmente, valores reales, aunque pueden ser categóricas o valores discretos.
El número de características debe ser fijado de antemano. Sin embargo, puede ser extremadamente alto (por ejemplo, millones de características), siendo cero en la mayoría de casos. En este tipo de datos, es buena idea usar matrices `scipy.sparse` que manejan mucho mejor la memoria.
Como ya comentamos en la sección anterior, representamos los ejemplos (puntos o instancias) como filas en el array de datos y almacenamos las características correspondientes, las "dimensiones", como columnas.
### Un ejemplo simple: el dataset Iris
Como ejemplo de un dataset simple, vamos a echar un vistazo al conjunto iris almacenado en scikit-learn.
Los datos consisten en medidas de tres especies de flores iris distintas:
Iris Setosa
<img src="figures/iris_setosa.jpg" width="50%">
Iris Versicolor
<img src="figures/iris_versicolor.jpg" width="50%">
Iris Virginica
<img src="figures/iris_virginica.jpg" width="50%">
### Pregunta rápida:
**Asumamos que estamos interesados en categorizar nuevos ejemplos; queremos predecir si una flor nueva va a ser Iris-Setosa, Iris-Versicolor, o Iris-Virginica. Basándonos en lo discutido en secciones anteriores, ¿cómo construiríamos este dataset?**
Recuerda: necesitamos un array 2D con forma (*shape*) `[n_samples x n_features]`.
- ¿Qué sería `n_samples`?
- ¿Qué podría ser `n_features`?
Recuerda que debe haber un número **fijo** de características por cada ejemplo, y cada característica *j* debe ser el mismo tipo de cantidad para cada ejemplo.
### Cargando el dataset Iris desde scikit-learn
Para futuros experimentos con algoritmos de aprendizaje automático, te recomendamos que añadas a favoritos el [Repositorio UCI](http://archive.ics.uci.edu/ml/), que aloja muchos de los datasets que se utilizan para probar los algoritmos de aprendizaje automático. Además, algunos de estos datasets ya están incluidos en scikit-learn, pudiendo así evitar tener que descargar, leer, convertir y limpiar los ficheros de texto o CSV. El listado de datasets ya disponibles en scikit learn puede consultarse [aquí](http://scikit-learn.org/stable/datasets/#toy-datasets).
Por ejemplo, scikit-learn contiene el dataset iris. Los datos consisten en:
- Características:
1. Longitud de sépalo en cm
2. Ancho de sépalo en cm
3. Longitud de pétalo en cm
4. Ancho de sépalo en cm
- Etiquetas a predecir:
1. Iris Setosa
2. Iris Versicolour
3. Iris Virginica
<img src="figures/petal_sepal.jpg" alt="Sepal" style="width: 50%;"/>
(Image: "Petal-sepal". Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg#/media/File:Petal-sepal.jpg)
``scikit-learn`` incluye una copia del archivo CSV de iris junto con una función que lo lee a arrays de numpy:
```
from sklearn.datasets import load_iris
iris = load_iris()
```
El dataset es un objeto ``Bunch``. Puedes ver que contiene utilizando el método ``keys()``:
```
iris.keys()
```
Las características de cada flor se encuentra en el atributo ``data`` del dataset:
```
n_samples, n_features = iris.data.shape
print('Número de ejemplos:', n_samples)
print('Número de características:', n_features)
# sepal length, sepal width, petal length y petal width del primer ejemplo (primera flor)
print(iris.data[0])
```
La información sobre la clase de cada ejemplo se encuentra en el atributo ``target`` del dataset:
```
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
import numpy as np
np.bincount(iris.target)
```
La función de numpy llamada `bincount` (arriba) nos permite ver que las clases se distribuyen de forma uniforme en este conjunto de datos (50 flores de cada especie), donde:
- clase 0: Iris-Setosa
- clase 1: Iris-Versicolor
- clase 2: Iris-Virginica
Los nombres de las clases se almacenan en ``target_names``:
```
print(iris.target_names)
```
Estos datos tienen cuatro dimensiones, pero podemos visualizar una o dos de las dimensiones usando un histograma o un scatter. Primero, activamos el *matplotlib inline mode*:
```
%matplotlib inline
import matplotlib.pyplot as plt
x_index = 3
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.hist(iris.data[iris.target==label, x_index],
label=iris.target_names[label],
color=color)
plt.xlabel(iris.feature_names[x_index])
plt.legend(loc='upper right')
plt.show()
x_index = 3
y_index = 0
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.scatter(iris.data[iris.target==label, x_index],
iris.data[iris.target==label, y_index],
label=iris.target_names[label],
c=color)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
plt.legend(loc='upper left')
plt.show()
```
<div class="alert alert-success">
<b>Ejercicio</b>:
<ul>
<li>
**Cambia** `x_index` **e** `y_index` ** en el script anterior y encuentra una combinación de los dos parámetros que separe de la mejor forma posible las tres clases.**
</li>
<li>
Este ejercicio es un adelanto a lo que se denomina **reducción de dimensionalidad**, que veremos después.
</li>
</ul>
</div>
### Matrices scatterplot
En lugar de realizar los plots por separado, una herramienta común que utilizan los analistas son las **matrices scatterplot**.
Estas matrices muestran los scatter plots entre todas las características del dataset, así como los histogramas para ver la distribución de cada característica.
```
import pandas as pd
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
pd.plotting.scatter_matrix(iris_df, c=iris.target, figsize=(8, 8));
```
## Otros datasets disponibles
[Scikit-learn pone a disposición de la comunidad una gran cantidad de datasets](http://scikit-learn.org/stable/datasets/#dataset-loading-utilities). Vienen en tres modos:
- **Packaged Data:** pequeños datasets ya disponibles en la distribución de scikit-learn, a los que se puede acceder mediante ``sklearn.datasets.load_*``
- **Downloadable Data:** estos datasets son más grandes y pueden descargarse mediante herramientas que scikit-learn
ya incluye. Estas herramientas están en ``sklearn.datasets.fetch_*``
- **Generated Data:** estos datasets se generan mediante modelos basados en semillas aleatorias (datasets sintéticos). Están disponibles en ``sklearn.datasets.make_*``
Puedes explorar las herramientas de datasets de scikit-learn usando la funcionalidad de autocompletado que tiene IPython. Tras importar el paquete ``datasets`` de ``sklearn``, teclea
datasets.load_<TAB>
o
datasets.fetch_<TAB>
o
datasets.make_<TAB>
para ver una lista de las funciones disponibles
```
from sklearn import datasets
```
Advertencia: muchos de estos datasets son bastante grandes y puede llevar bastante tiempo descargarlos.
Si comienzas una descarga con un libro de IPython y luego quieres detenerla, puedes utilizar la opción "kernel interrupt" accesible por el menú o con ``Ctrl-m i``.
Puedes presionar ``Ctrl-m h`` para una lista de todos los atajos ``ipython``.
## Cargando los datos de dígitos
Ahora vamos a ver otro dataset, donde podemos estudiar mejor como representar los datos. Podemos explorar los datos de la siguiente forma:
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
n_samples, n_features = digits.data.shape
print((n_samples, n_features))
print(digits.data[0])
print(digits.data[-1])
print(digits.target)
```
Aquí la etiqueta es directamente el dígito que representa cada ejemplo. Los datos consisten en un array de longitud 64... pero, ¿qué significan estos datos?
Una pista viene dada por el hecho de que tenemos dos versiones de los datos:
``data`` y ``images``. Vamos a echar un vistazo a ambas:
```
print(digits.data.shape)
print(digits.images.shape)
```
Podemos ver que son lo mismo, mediante un simple *reshaping*:
```
import numpy as np
print(np.all(digits.images.reshape((1797, 64)) == digits.data))
```
Vamos a visualizar los datos. Es un poco más complejo que el scatter plot que hicimos anteriormente.
```
# Configurar la figura
fig = plt.figure(figsize=(6, 6)) # tamaño en pulgadas
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# mostrar algunos dígitos: cada imagen es de 8x8
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# Etiquetar la imagen con el valor objetivo
ax.text(0, 7, str(digits.target[i]))
```
Ahora podemos saber que significan las características. Cada característica es una cantidad real que representa la oscuridad de un píxel en una imagen 8x8 de un dígito manuscrito.
Aunque cada ejemplo tiene datos que son inherentemente de dos dimensiones, la matriz de datos incluye estos datos 2D en un **solo vector**, contenido en cada **fila** de la misma.
<div class="alert alert-success">
<b>Ejercicio: trabajando con un dataset de reconocimiento facial</b>:
<ul>
<li>
Vamos a pararnos a explorar el dataset de reconocimiento facial de Olivetti.
Descarga los datos (sobre 1.4MB), y visualiza las caras.
Puedes copiar el código utilizado para visualizar los dígitos, modificándolo convenientemente.
</li>
</ul>
</div>
```
from sklearn.datasets import fetch_olivetti_faces
# descarga el dataset faces
# Utiliza el script anterior para representar las caras
# Pista: plt.cm.bone es un buen colormap para este dataset
```
|
github_jupyter
|
# Day 1
```
from sklearn.datasets import load_iris
import pandas as pd
import numpy as np
iris = load_iris()
df = pd.DataFrame(np.c_[iris['data'], iris['target']], columns = iris['feature_names'] + ['species'])
df['species'] = df['species'].replace([0,1,2], iris.target_names)
df.head()
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.RandomState(42)
x = 10 * rng.rand(50)
y = 2 * x - 1 + rng.randn(50)
x
plt.scatter(x, y)
plt.show()
# 1
from sklearn.linear_model import LinearRegression
# 2
LinearRegression?
model_lr = LinearRegression(fit_intercept=True)
# 3
# x = data feature
# y = data target
x.shape
x_matriks = x[:, np.newaxis]
x_matriks.shape
# 4
# model_lr.fit(input_data, output_data)
model_lr.fit(x_matriks, y)
# Testing
x_test = np.linspace(10, 12, 15)
x_test = x_test[:, np.newaxis]
x_test
# 5
y_test = model_lr.predict(x_test)
y_test
y_train = model_lr.predict(x_matriks)
plt.scatter(x, y, color='r')
plt.plot(x, y_train, label="Model Training")
plt.plot(x_test, y_test, label="Test Result/hasil Prediksi")
plt.legend()
plt.show()
```
# Day 2
```
from sklearn.datasets import load_iris
import pandas as pd
import numpy as np
iris = load_iris()
df = pd.DataFrame(np.c_[iris['data'], iris['target']], columns = iris['feature_names'] + ['species'])
df.head()
iris
from scipy import stats
z = stats.zscore(df)
z
print(np.where(z>3))
# import class model
from sklearn.neighbors import KNeighborsClassifier
z[15][1]
# Membuat objek model dan memilih hyperparameter
# KNeighborsClassifier?
model_knn = KNeighborsClassifier(n_neighbors=6, weights='distance')
# Memisahkan data feature dan target
X = df.drop('species', axis=1)
y = df['species']
X
# Perintahkan model untuk mempelajari data dengan menggunakan method .fit()
model_knn.fit(X, y)
# predict
x_new = np.array([
[2.5, 4, 3, 0.1],
[1, 3.5, 1.7, 0.4],
[4, 1, 3, 0.3]
])
y_new = model_knn.predict(x_new)
y_new
# 0 = sentosa
# 1 = versicolor
# 2 = virginica
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.RandomState(1)
x = 10*rng.rand(50)
y = 5*x + 10 + rng.rand(50)
plt.scatter(x, y)
plt.show()
from sklearn.linear_model import LinearRegression
model_lr = LinearRegression(fit_intercept=True)
model_lr.fit(x[:, np.newaxis], y)
y_predict = model_lr.predict(x[:, np.newaxis])
plt.plot(x, y_predict, color='r', label='Model Predicted Data')
plt.scatter(x, y, label='Actual Data')
plt.legend()
plt.show()
model_lr.coef_
model_lr.intercept_
# y = 5*x + 10 + rng.rand(50)
x = rng.rand(50, 3)
y = np.dot(x, [4, 2, 7]) + 20 # sama dengan x*4 + x*2 + x*7 + 20
x.shape
y
model_lr2 = LinearRegression(fit_intercept=True)
model_lr2.fit(x, y)
y_predict = model_lr2.predict(x)
model_lr2.coef_
model_lr2.intercept_
```
# Day 3
```
from sklearn.neighbors import KNeighborsClassifier
model_knn = KNeighborsClassifier(n_neighbors=2)
x_train = df.drop('species', axis=1)
y_train = df['species']
model_knn.fit(x_train, y_train)
# cara salah dalam mengevaluasi model
y_prediksi = model_knn.predict(x_train)
from sklearn.metrics import accuracy_score
score = accuracy_score(y_train, y_prediksi)
score
# cara yang benar
x = df.drop('species', axis=1)
y = df['species']
y.value_counts()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=21, stratify=y)
# x -> x_train, x_test -0.3-0.2
# y -> y_train, y_test -0.3-0.2
# valuenya sama karena stratify
y_train.value_counts()
print(x_train.shape)
print(x_test.shape)
model_knn = KNeighborsClassifier(n_neighbors=2)
model_knn.fit(x_train, y_train)
y_predik = model_knn.predict(x_test)
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test, y_predik)
score
from sklearn.model_selection import cross_val_score
model_knn = KNeighborsClassifier(n_neighbors=2)
cv_result = cross_val_score(model_knn, x, y, cv=10)
cv_result.mean()
import pandas as pd
import numpy as np
colnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
df = pd.read_csv('pima-indians-diabetes.csv', names=colnames)
df.head()
df['class'].value_counts()
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
X = df.drop('class', axis=1)
Xs = scale(X)
y = df['class']
X_train, X_test, y_train, y_test = train_test_split(Xs, y, random_state=21, stratify=y, test_size=0.2)
model_lr = LogisticRegression(random_state=21)
params_grid = {
'C':np.arange(0.1, 1, 0.1), 'class_weight':[{0:x, 1:1-x} for x in np.arange(0.1, 0.9, 0.1)]
}
gscv = GridSearchCV(model_lr, params_grid, cv=10, scoring='f1')
gscv.fit(X_train, y_train)
X_test
y_pred = gscv.predict(X_test)
y_pred
from sklearn.metrics import confusion_matrix, classification_report
confusion_matrix(y_test, y_pred, labels=[1, 0])
TP = 39
FN = 15
FP = 25
TN = 75
print(classification_report(y_test, y_pred))
# menghitung nilai precisi, recall, f-1 score dari model kita dalam memprediksi data yang positif
precision = TP/(TP+FP)
recall = TP/(TP+FN)
f1score = 2 * precision * recall / (precision + recall)
print(precision)
print(recall)
print(f1score)
# menghitung nilai precisi, recall, f-1 score dari model kita dalam memprediksi data yang negatif
precision = TN/(TN+FN)
recall = TN/(TN+FP)
f1score = (precision * recall * 2) / (precision + recall)
print(precision)
print(recall)
print(f1score)
```
# Day 4
```
from sklearn.datasets import load_iris
import pandas as pd
import numpy as np
colnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
df = pd.read_csv('pima-indians-diabetes.csv', names=colnames)
df.head()
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate, cross_val_score
X = df.drop('class', axis=1)
y = df['class']
model = KNeighborsClassifier(n_neighbors=5)
cv_score1 = cross_validate(model, X, y, cv=10, return_train_score=True)
cv_score2 = cross_val_score(model, X, y, cv=10)
cv_score1
cv_score2
cv_score1['test_score'].mean()
cv_score2.mean()
def knn_predict(k):
model = KNeighborsClassifier(n_neighbors=k)
score = cross_validate(model, X, y, cv=10, return_train_score=True)
train_score = score['train_score'].mean()
test_score = score['test_score'].mean()
return train_score, test_score
train_scores = []
test_scores = []
for k in range(2, 100):
# lakukan fitting
# kemudian scoring
train_score, test_score = knn_predict(k)
train_scores.append(train_score)
test_scores.append(test_score)
train_scores
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(range(2, 100), train_scores, marker='x', color='b', label='Train Scores')
ax.plot(range(2, 100), test_scores, marker='o', color='g', label='Test Scores')
ax.set_xlabel('Nilai K')
ax.set_ylabel('Score')
fig.legend()
plt.show()
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
model = KNeighborsClassifier()
param_grid = {'n_neighbors':np.arange(5, 50), 'weights':['distance', 'uniform']}
gscv = GridSearchCV(model, param_grid=param_grid, scoring='accuracy', cv=5)
gscv.fit(X, y)
gscv.best_params_
gscv.best_score_
rscv = RandomizedSearchCV(model, param_grid, n_iter=15, scoring='accuracy', cv=5)
rscv.fit(X, y)
rscv.best_params_
rscv.best_score_
```
# Day 5
```
data = {
'pendidikan_terakhir' : ['SD', 'SMP', 'SMA', 'SMP', 'SMP'],
'tempat_tinggal' : ['Bandung', 'Garut', 'Bandung', 'Cirebon', 'Jakarta'],
'status' : ['Menikah', 'Jomblo', 'Janda', 'Jomblo', 'Duda'],
'tingkat_ekonomi' : ['Kurang Mampu', 'Berkecukupan', 'Mampu', 'Sangat Mampu', 'Mampu'],
'jumlah_anak' : [1, 4, 2, 0, 3]
}
import pandas as pd
df = pd.DataFrame(data)
df.head()
df = pd.get_dummies(df, columns=['tempat_tinggal', 'status'])
df
obj_dict = {
'Kurang Mampu' : 0,
'Berkecukupan' : 1,
'Mampu' : 2,
'Sangat Mampu' : 3
}
df['tingkat_ekonomi'] = df['tingkat_ekonomi'].replace(obj_dict)
df['tingkat_ekonomi']
import numpy as np
data = {
'pendidikan_terakhir' : [np.nan, 'SMP', 'SD', 'SMP', 'SMP', 'SD', 'SMP', 'SMA', 'SD'],
'tingkat_ekonomi' : [0, 1, 2, 3, 2, 2, 1, 1, 3],
# 'jumlah_anak' : [1, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 1, 2]
'jumlah_anak' : [1, np.nan, np.nan, 1, 1, 1, 3, 1, 2]
}
data_ts = {
'Hari' : [1, 2, 3, 4, 5],
'Jumlah' : [12, 23, np.nan, 12, 20]
}
df = pd.DataFrame(data)
df_ts = pd.DataFrame(data_ts)
df
```
5 Cara dalam menghandle missing value:
1. Drop missing value : Jumlah missing value data banyak
2. Filling with mean/median : berlaku untuk data yang bertipe numerik
3. Filling with modus : berlaku untuk data yang bertipe kategori
4. Filling with bffill atau ffill
5. KNN
```
1. # drop berdasarkan row
df.dropna(axis=0)
# 1. drop berdasarkan column
df.drop(['jumlah_anak'], axis=1)
# 2 kelemahannya kurang akurat
df['jumlah_anak'] = df['jumlah_anak'].fillna(df['jumlah_anak'].mean())
df['jumlah_anak']
df['jumlah_anak'] = df['jumlah_anak'].astype(int)
df['jumlah_anak']
df
# 3
df['pendidikan_terakhir'].value_counts()
df['pendidikan_terakhir'] = df['pendidikan_terakhir'].fillna('SMP')
df
# 4 bfill nan diisi dengan nilai sebelumnya
df_ts.fillna(method='bfill')
# 4 ffill nan diisi dengan nilai sebelumnya
df_ts.fillna(method='ffill')
df
from sklearn.impute import KNNImputer
imp = KNNImputer(n_neighbors=5)
# imp.fit_transform(df['jumlah_anak'][:, np.newaxis])
imp.fit_transform(df[['jumlah_anak', 'tingkat_ekonomi']])
import pandas as pd
colnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
df = pd.read_csv('pima-indians-diabetes.csv', names=colnames)
df.head()
df.describe()
X = df.drop('class', axis=1)
X.head()
from sklearn.preprocessing import StandardScaler
stdscalar = StandardScaler()
datascale = stdscalar.fit_transform(X)
colnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age']
dfscale = pd.DataFrame(datascale, columns=colnames)
dfscale
dfscale.describe()
from sklearn.preprocessing import Normalizer
normscaler = Normalizer()
datanorm = normscaler.fit_transform(X)
colnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age']
dfnorm = pd.DataFrame(datanorm, columns=colnames)
dfnorm
dfnorm.describe()
```
1. Normalization digunakan ketika kita tidak tahu bahwa kita tidak harus memiliki asumsi bahwa data kita itu memiliki distribusi normal, dan kita memakai algoritma ML yang tidak harus mengasumsikan bentuk distribusi dari data... contohnya KNN, neural network, dll
2. Standardization apabila data kita berasumsi memiliki distribusi normal
|
github_jupyter
|
# Table of Contents
<p><div class="lev1 toc-item"><a href="#Simulated-annealing-in-Python" data-toc-modified-id="Simulated-annealing-in-Python-1"><span class="toc-item-num">1 </span>Simulated annealing in Python</a></div><div class="lev2 toc-item"><a href="#References" data-toc-modified-id="References-11"><span class="toc-item-num">1.1 </span>References</a></div><div class="lev2 toc-item"><a href="#See-also" data-toc-modified-id="See-also-12"><span class="toc-item-num">1.2 </span>See also</a></div><div class="lev2 toc-item"><a href="#About" data-toc-modified-id="About-13"><span class="toc-item-num">1.3 </span>About</a></div><div class="lev2 toc-item"><a href="#Algorithm" data-toc-modified-id="Algorithm-14"><span class="toc-item-num">1.4 </span>Algorithm</a></div><div class="lev2 toc-item"><a href="#Basic-but-generic-Python-code" data-toc-modified-id="Basic-but-generic-Python-code-15"><span class="toc-item-num">1.5 </span>Basic but generic Python code</a></div><div class="lev2 toc-item"><a href="#Basic-example" data-toc-modified-id="Basic-example-16"><span class="toc-item-num">1.6 </span>Basic example</a></div><div class="lev2 toc-item"><a href="#Visualizing-the-steps" data-toc-modified-id="Visualizing-the-steps-17"><span class="toc-item-num">1.7 </span>Visualizing the steps</a></div><div class="lev2 toc-item"><a href="#More-visualizations" data-toc-modified-id="More-visualizations-18"><span class="toc-item-num">1.8 </span>More visualizations</a></div>
# Simulated annealing in Python
This small notebook implements, in [Python 3](https://docs.python.org/3/), the [simulated annealing](https://en.wikipedia.org/wiki/Simulated_annealing) algorithm for numerical optimization.
## References
- The Wikipedia page: [simulated annealing](https://en.wikipedia.org/wiki/Simulated_annealing).
- It was implemented in `scipy.optimize` before version 0.14: [`scipy.optimize.anneal`](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.anneal.html).
- [This blog post](http://apmonitor.com/me575/index.php/Main/SimulatedAnnealing).
- These Stack Overflow questions: [15853513](https://stackoverflow.com/questions/15853513/) and [19757551](https://stackoverflow.com/questions/19757551/).
## See also
- For a real-world use of simulated annealing, this Python module seems useful: [perrygeo/simanneal on GitHub](https://github.com/perrygeo/simanneal).
## About
- *Date:* 20/07/2017.
- *Author:* [Lilian Besson](https://GitHub.com/Naereen), (C) 2017.
- *Licence:* [MIT Licence](http://lbesson.mit-license.org).
----
> This notebook should be compatible with both Python versions, [2](https://docs.python.org/2/) and [3](https://docs.python.org/3/).
```
from __future__ import print_function, division # Python 2 compatibility if needed
import numpy as np
import numpy.random as rn
import matplotlib.pyplot as plt # to plot
import matplotlib as mpl
from scipy import optimize # to compare
import seaborn as sns
sns.set(context="talk", style="darkgrid", palette="hls", font="sans-serif", font_scale=1.05)
FIGSIZE = (19, 8) #: Figure size, in inches!
mpl.rcParams['figure.figsize'] = FIGSIZE
```
----
## Algorithm
The following pseudocode presents the simulated annealing heuristic.
- It starts from a state $s_0$ and continues to either a maximum of $k_{\max}$ steps or until a state with an energy of $e_{\min}$ or less is found.
- In the process, the call $\mathrm{neighbour}(s)$ should generate a randomly chosen neighbour of a given state $s$.
- The annealing schedule is defined by the call $\mathrm{temperature}(r)$, which should yield the temperature to use, given the fraction $r$ of the time budget that has been expended so far.
> **Simulated Annealing**:
>
> - Let $s$ = $s_0$
> - For $k = 0$ through $k_{\max}$ (exclusive):
> + $T := \mathrm{temperature}(k ∕ k_{\max})$
> + Pick a random neighbour, $s_{\mathrm{new}} := \mathrm{neighbour}(s)$
> + If $P(E(s), E(s_{\mathrm{new}}), T) \geq \mathrm{random}(0, 1)$:
> * $s := s_{\mathrm{new}}$
> - Output: the final state $s$
----
## Basic but generic Python code
Let us start with a very generic implementation:
```
def annealing(random_start,
cost_function,
random_neighbour,
acceptance,
temperature,
maxsteps=1000,
debug=True):
""" Optimize the black-box function 'cost_function' with the simulated annealing algorithm."""
state = random_start()
cost = cost_function(state)
states, costs = [state], [cost]
for step in range(maxsteps):
fraction = step / float(maxsteps)
T = temperature(fraction)
new_state = random_neighbour(state, fraction)
new_cost = cost_function(new_state)
if debug: print("Step #{:>2}/{:>2} : T = {:>4.3g}, state = {:>4.3g}, cost = {:>4.3g}, new_state = {:>4.3g}, new_cost = {:>4.3g} ...".format(step, maxsteps, T, state, cost, new_state, new_cost))
if acceptance_probability(cost, new_cost, T) > rn.random():
state, cost = new_state, new_cost
states.append(state)
costs.append(cost)
# print(" ==> Accept it!")
# else:
# print(" ==> Reject it...")
return state, cost_function(state), states, costs
```
----
## Basic example
We will use this to find the global minimum of the function $x \mapsto x^2$ on $[-10, 10]$.
```
interval = (-10, 10)
def f(x):
""" Function to minimize."""
return x ** 2
def clip(x):
""" Force x to be in the interval."""
a, b = interval
return max(min(x, b), a)
def random_start():
""" Random point in the interval."""
a, b = interval
return a + (b - a) * rn.random_sample()
def cost_function(x):
""" Cost of x = f(x)."""
return f(x)
def random_neighbour(x, fraction=1):
"""Move a little bit x, from the left or the right."""
amplitude = (max(interval) - min(interval)) * fraction / 10
delta = (-amplitude/2.) + amplitude * rn.random_sample()
return clip(x + delta)
def acceptance_probability(cost, new_cost, temperature):
if new_cost < cost:
# print(" - Acceptance probabilty = 1 as new_cost = {} < cost = {}...".format(new_cost, cost))
return 1
else:
p = np.exp(- (new_cost - cost) / temperature)
# print(" - Acceptance probabilty = {:.3g}...".format(p))
return p
def temperature(fraction):
""" Example of temperature dicreasing as the process goes on."""
return max(0.01, min(1, 1 - fraction))
```
Let's try!
```
annealing(random_start, cost_function, random_neighbour, acceptance_probability, temperature, maxsteps=30, debug=True);
```
Now with more steps:
```
state, c, states, costs = annealing(random_start, cost_function, random_neighbour, acceptance_probability, temperature, maxsteps=1000, debug=False)
state
c
```
----
## Visualizing the steps
```
def see_annealing(states, costs):
plt.figure()
plt.suptitle("Evolution of states and costs of the simulated annealing")
plt.subplot(121)
plt.plot(states, 'r')
plt.title("States")
plt.subplot(122)
plt.plot(costs, 'b')
plt.title("Costs")
plt.show()
see_annealing(states, costs)
```
----
## More visualizations
```
def visualize_annealing(cost_function):
state, c, states, costs = annealing(random_start, cost_function, random_neighbour, acceptance_probability, temperature, maxsteps=1000, debug=False)
see_annealing(states, costs)
return state, c
visualize_annealing(lambda x: x**3)
visualize_annealing(lambda x: x**2)
visualize_annealing(np.abs)
visualize_annealing(np.cos)
visualize_annealing(lambda x: np.sin(x) + np.cos(x))
```
In all these examples, the simulated annealing converges to a global minimum.
It can be non-unique, but it is found.
----
> That's it for today, folks!
More notebooks can be found on [my GitHub page](https://GitHub.com/Naereen/notebooks).
|
github_jupyter
|
# Writing a Device driver
### Basic structure
Here is a simple (but complete and functional) code block that implements a VISA driver for a power sensor:
```
import labbench as lb
import pandas as pd
# Specific driver definitions are implemented by subclassing classes like lb.VISADevice
class PowerSensor(lb.VISADevice):
initiate_continuous = lb.property.bool(key='INIT:CONT')
output_trigger = lb.property.bool(key='OUTP:TRIG')
trigger_source = lb.property.str(key='TRIG:SOUR', only=['IMM','INT','EXT','BUS','INT1'])
trigger_count = lb.property.int(key='TRIG:COUN', min=1,max=200,step=1)
measurement_rate = lb.property.str(key='SENS:MRAT', only=['NORM','DOUB','FAST'])
sweep_aperture = lb.property.float(key='SWE:APER', min=20e-6, max=200e-3,help='time (in s)')
frequency = lb.property.float(key='SENS:FREQ', min=10e6, max=18e9,help='center frequency (Hz)')
def preset (self):
""" Apply the instrument's preset state.
"""
self.write('SYST:PRES')
def fetch (self):
""" Get already-acquired data from the instrument.
Returns:
The data trace packaged as a pd.DataFrame
"""
response = self.query('FETC?').split(',')
if len(response)==1:
return float(response[0])
else:
return pd.to_numeric(pd.Series(response))
```
Let's work through what this does.
### 1. Every `labbench` driver is a subclass of a labbench Device class, such as lb.VISADevice:
This is the definition of the PowerSensor:
```python
class PowerSensor(lb.VISADevice):
# ...
```
This single line gives our power sensor driver all of the general capabilities of a VISA driver this driver class (known as "subclassing" "inheriting" in software engineering). This means that in this one line, the PowerSensor driver has adopted _all of the same member and attribute features as a "plain" VISADevice_. The `VISADevice` class helps streamline use of the `pyvisa` with features like
* managing connection and disconnection, given a VISA resource string;
* shortcuts for accessing simple instrument states, implemented entirely based on definitions (discussed below); and
* wrapper methods (i.e., member functions) for pyvisa resource `write` and `query` methods.
A more complete listing of everything that comes with `lb.VISADevice` is in the [programming reference](http://ssm.ipages.nist.gov/labbench/labbench.html#labbench.backends.VISADevice).
This power sensor driver definition is just that - a definition. To _use_ the driver and connect to the instrument in the lab, instantiate it and connect to the device. This is the simplest recommended way to instantiate, connect, and then disconnect in a script:
```python
# Here is the `with` block
with PowerSensor('TCPIP::10.0.0.1::::INSTR') as sensor:
pass
# The sensor is connected in this "with" block. Afterward, it disconnects, even
# if there is an exception. Automation code that uses the sensor would go here.
# Now the `with` block is done and we're disconnected
print('Disconnected, all done')
```
It's nice to leave the sensor connected sometimes, like for interactive play on a python prompt. In that case, you can manually connect and disconnect:
```python
sensor = PowerSensor('TCPIP::10.0.0.1::::INSTR')
sensor.connect()
# The sensor is connected now. Automation code that uses the sensor would go here.
sensor.disconnect() # We have to manually disconnect when we don't use a with block.
print('Disconnected, all done')
```
There are two key pieces here:
* The instantiation, `PowerSensor('TCPIP::10.0.0.1::::INSTR')`, is where we create a power sensor object that we can interact with. All `VISADevice` drivers use this standard resource string formatting; other types of drivers have different formats.
* The `with` block (talked about under the name _context management_ in python language documents) serves two functions for any labbench driver (not just VISADevice):
1. The instrument is connected at the start of the with block
2. guarantees that the instrument will be disconnected after the with end of the with block, _even if there is an error inside the block!_
### 2. Getting and setting simple parameters in the device the `state` object
##### Reading the definition
Each driver has an attribute called `state`. It is an optional way to give your users shortcuts to get and set simple instrument settings. This is the definition from the example above:
```python
initiate_continuous = lb.Bool (key='INIT:CONT')
output_trigger = lb.Bool (key='OUTP:TRIG')
trigger_source = lb.EnumBytes (key='TRIG:SOUR', values=['IMM','INT','EXT','BUS','INT1'])
trigger_count = lb.Int (key='TRIG:COUN', min=1,max=200,step=1)
measurement_rate = lb.EnumBytes (key='SENS:MRAT', values=['NORM','DOUB','FAST'])
sweep_aperture = lb.Float (key='SWE:APER', min=20e-6, max=200e-3,help='time (in s)')
frequency = lb.Float (key='SENS:FREQ', min=10e6, max=18e9,help='input center frequency (in Hz)')
```
The `VISADevice` driver uses the metadata given for each descriptor above to determine how to communicate with the remote instrument on assignment. Behind the scenes, the `state` object has extra features that can monitor changes to these states to automatically record the changes we make to these states to a database, or (in the future) automatically generate a GUI front-panel.
*Every* labbench driver has a state object, including at least the boolean state called `connected` (indicating whether the host computer is connected with the remote device is connected or not).
---
##### Using state attributes
Making an instance of PowerSensor - in the example, this was `PowerSensor('TCPIP::10.0.0.1::::INSTR')` - causes the `state` object to become interactive.
Assignment causes causes the setting to be applied to the instrument. For example,
`sensor.state.initiate_continuous = True` makes machinery inside `lb.VISADevice` do the following:
1. validate that `True` is a valid python boolean value (because we defined it as `lb.Bool`)
2. convert the python boolean `True` to a string (because `lb.VISADevice` knows SCPI uses string commands)
3. send the SCPI string `'INIT:CONT TRUE'` (because we told it the command string is `'INIT:CONT'`, and by default it assumes that settings should be applied as `'<command> <value>'`)
Likewise, a parameter "get" operation is triggered by simply using the attribute. The statement `print(sensor.state.initiate_continuous)` triggers `lb.VISADevice` to do the following:
1. an SCPI query with the string `'INIT:CONT?'` (because we told it the command string is `'INIT:CONT'`, and by default it assumes that settings should be applied as `'<command>?'` with return values reported in a response string),
2. the response string is converted to a python boolean type (because we defined it as `lb.Bool`),
3. the converted boolean value is passed to the `print` function for display.
##### Example of assigning to and from states
Here is working example that gets and sets parameter values by communicating with the device.
```python
with PowerSensor('TCPIP::10.0.0.1::::INSTR') as sensor:
# This prints True if we're still in the with block
print(sensor.state.isopen)
# Use SCPI to request the identity of the sensor,
# then return and print it. This was inherited from
# VISADevice, so it is available on any VISADevice driver.
print(sensor.state.identity)
# PowerSensor.state.frequency is defined as a float. Assigning
# to it causes logic inherited from lb.VISADevice
# to convert this to a string, and then write the SCPI string
# 'SENS:FREQ 2.45e9' to the instrument.
sensor.state.frequency = 2.45e9 # Set the power sensor center frequency to 2.45e9 GHz
# We can also access the remote value of sensor.state.frequency.
# Behind the scenes, each time we fetch the value, magic in
# lb.VISADevice retrieves the current value from the instrument
# with the SCPI query 'SENS:FREQ?', and then converts it to a floating point
# number.
print('The sensor frequency is {} GHz'.format(sensor.state.frequency/1e9))
print(sensor.state.isopen) # Prints False - we're disconnected
```
Simply put: assigning to or from with the attribute in the driver state instance causes remote set or get operations. The python data type matches the definition in the `state` class.
##### Discovering and navigating states
Inheriting from `VISADevice` means that `PowerSensor.state` includes the seven states defined here, plus all others listed provided by VISADevice.state. Since these aren't listed here, it can get confusing tracking what has been inherited (like in other object-oriented libraries). Fortunately, there are many ways to explore the entire list of states that have been inherited from the parent state class:
1. Look it up [in the API reference manual](http://ssm.ipages.nist.gov/labbench/labbench.html#labbench.visa.VISADevice.state)
2. When working with an instantiated driver object in an ipython or jupyter notebook command prompt, type `lb.VISADevice.state.` and press tab to autocomplete a list of valid options. You'll also see some functions to do esoteric things with these states.
3. When working in an editor like pycharm or spyder, you can ctrl+click on the right side of `VISADevice.state` to skip directly to looking at the definition of `VISADevice.state` in the `labbench` library
4. When working in any kind of python prompt, you can use the `help` function
```python
help(PowerSensor.state)
```
5. When working in an ipython or jupyter prompt, a nicer option than 4. is the ipython help magick:
```python
PowerSensor.state?
```
##### Writing state attributes
The way we code this is a little unusual outside of python packages for web development. When we write a driver class, we add attributes defined with helper information such as
- the python type that should represent the parameter
- bounds for acceptable values of the parameter
- descriptive "help" information for the user
These attributes are a kind of python type state class is a _descriptor_. We call them "traits" because following an underlying library that we extend, [traitlets](https://github.com/ipython/traitlets) under the hood. The example includes seven state traits.
After instantiating with `PowerSensor()`, we can start interacting with `sensor.state`. Each one is now a live object we can assign to and use like any other python object. The difference is, each time we get the value, it is queried from the instrument, and each time we assign to it (the normal `=` operator), a set command goes to the instrument to set it.
The definition above includes metadata that dictates the python data type handled for this assignment operation, and how it should be converted:
| **Descriptor metadata type** | **Uses in `PowerSensor` example** | **Behavior depends on the Device implementation** |
|--------------------------------- |------------------------------------|----------------------------------- |
| Python data type for assignment | `lb.Float`, `lb.EnumBytes`, etc. | No |
| Data validation settings | `min`,`max`,`step` (for numbers) | No |
| | `values` (for enumerated types) | No |
| Documentation strings | `help` | No |
| Associated backend command | `command` | Yes |
Some types of drivers ignore `command` keyword, as discussed in [how to write a labbench device driver](how to write a device driver).
### 3. Device methods for commands and data acquisition
The `state` class above is useful for remote assignment operations on simple scalar data types. Supporting a broader collection of operation types ("trigger a measurement," "fetch and return measurement data," etc.) need the flexibility of more general-purpose functions. In python, a member function of a class is called a method.
Here are the methods defined in `PowerSensor`:
```python
def preset (self):
self.write('SYST:PRES')
def fetch (self):
response = self.query('FETC?').split(',')
if len(response)==1:
return float(response[0])
else:
return pd.to_numeric(pd.Series(response))
```
These are the methods that are specific to our power sensor device.
* The `preset` function tells the device to revert to its default state.
* The `fetch` method performs some text processing on the response from the device, and returns either a single scalar or a pandas Series if the result is a sequence of power values.
The `labbench` convention is that the names of these methods are verbs (or sentence predicates, when single words are not specific enough).
##### Example data acquisition script
Here is an example that presets the device, sets the center frequency to 2.45 GHz, and then collects 10 power samples:
```
with PowerSensor('TCPIP::10.0.0.1::::INSTR') as sensor:
print('Connected to power sensor {}'.format(sensor.state.identity))
sensor.preset()
sensor.wait() # VISADevice includes in the standard VISA wait method, which sends SCPI '*WAI'
sensor.state.frequency = 2.45e9 # Set the power sensor center frequency to 2.45e9 GHz
power_levels = pd.Series([sensor.fetch() for i in range(10)])
print('All done! Got these power levels: ')
print(power_levels)
```
##### Discovering and navigating device driver methods
Inheritance has similar implications as it does for the `VISADevice.state` class. Inheriting from `VISADevice` means that `PowerSensor` includes the `preset` and `fetch` methods, plus many more from `lb.VISADevice` (some of which it inherited from `lb.Device`). Since these aren't listed in the example definition above, it can get confusing tracking what methods are available through inheritance (like in other object-oriented libraries). Sometimes, informally, this confusion is called "abstraction halitosis." Fortunately, there are many ways to identify the available objects and methods:
1. Look it up [in the API reference manual](http://ssm.ipages.nist.gov/labbench/labbench.html#labbench.visa.VISADevice)
2. When working with an instantiated driver object in an ipython or jupyter notebook command prompt, type `lb.VISADevice.` and press tab to autocomplete a list of valid options. You'll also see some functions to do esoteric things with these states.
3. When working in an editor like pycharm or spyder, you can ctrl+click on the right side of `VISADevice` to skip directly to looking at the definition of `VISADevice` in the `labbench` library
4. When working in any kind of python prompt, you can use the `help` function
```python
help(PowerSensor)
```
5. When working in an ipython or jupyter prompt, a nicely formatted version of 4. is the ipython help magick:
```python
PowerSensor?
```
## Miscellaneous extras
##### Connecting to multiple devices
The best way to connect to multiple devices is to use a single `with` block. For example, a 10-sample acquisition with two power sensors might look like this:
```
with PowerSensor('TCPIP::10.0.0.1::::INSTR') as sensor1,\
PowerSensor('TCPIP::10.0.0.2::::INSTR') as sensor2:
print('Connected to power sensors')
for sensor in sensor1, sensor2:
sensor.preset()
sensor.wait() # VISADevice includes in the standard VISA wait method, which sends SCPI '*WAI'
sensor.state.frequency = 2.45e9 # Set the power sensor center frequency to 2.45e9 GHz
power_levels = pd.DataFrame([[sensor1.fetch(),sensor2.fetch()] for i in range(10)])
print('All done! Got these power levels: ')
print(power_levels)
```
##### Execute a function on state changes
Database management and user interface tools make extensive use of callbacks, which gives an opportunity for you to execute custom code any time an assignment causes a state to change. A state change can occur in a couple of ways:
* This triggers a callback if 2.45e9 is different than the last observed frequency:
```python
sensor.state.frequency = 2.45e9
```
* This triggers a callback if the instrument returns a frequency that is is different than the last observed frequency
```python
current_freq = sensor.state.frequency
```
Configure a function call on an observed change with the `observe` method in `sensor.state`:
```
def callback(change):
""" the callback function is given a single argument. change
is a dictionary containing the descriptor ('frequency'),
the state instance that contains frequency, and both
the old and new values.
"""
# insert GUI update here?
# commit updated state to a database here?
print(change)
with PowerSensor('TCPIP::10.0.0.1::::INSTR') as sensor:
sensor.state.observe(callback)
sensor.preset()
sensor.wait() # VISADevice includes in the standard VISA wait method, which sends SCPI '*WAI'
sensor.state.frequency = 2.45e9 # Set the power sensor center frequency to 2.45e9 GHz
print('All done! Got these power levels: ')
print(power_levels)
```
Use of callbacks can help separate the actual measurement loop (the contents of the `with` block) from other functions for debugging, GUI, and database management. The result can be code that is more clear.
|
github_jupyter
|
# Computation on Arrays: Broadcasting
We saw in the previous section how NumPy's universal functions can be used to *vectorize* operations and thereby remove slow Python loops.
Another means of vectorizing operations is to use NumPy's *broadcasting* functionality.
Broadcasting is simply a set of rules for applying binary ufuncs (e.g., addition, subtraction, multiplication, etc.) on arrays of different sizes.
## Introducing Broadcasting
Recall that for arrays of the same size, binary operations are performed on an element-by-element basis:
```
import numpy as np
a = np.array([0, 1, 2])
b = np.array([5, 5, 5])
a + b
```
Broadcasting allows these types of binary operations to be performed on arrays of different sizes–for example, we can just as easily add a scalar (think of it as a zero-dimensional array) to an array:
```
a + 5
```
We can think of this as an operation that stretches or duplicates the value ``5`` into the array ``[5, 5, 5]``, and adds the results.
The advantage of NumPy's broadcasting is that this duplication of values does not actually take place, but it is a useful mental model as we think about broadcasting.
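If you want to peek under the hood (an aside, not from the original text), ``np.broadcast_to`` makes the "stretched" array explicit: the result is a read-only view whose broadcast axis has a zero stride, confirming that the value is never actually copied.
```
b5 = np.broadcast_to(5, (3,))
print(b5)          # [5 5 5]
print(b5.strides)  # (0,) -- the single value 5 is reused, not duplicated
```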
We can similarly extend this to arrays of higher dimension. Observe the result when we add a one-dimensional array to a two-dimensional array:
```
M = np.ones((3, 3))
M
M + a
```
Here the one-dimensional array ``a`` is stretched, or broadcast across the second dimension in order to match the shape of ``M``.
While these examples are relatively easy to understand, more complicated cases can involve broadcasting of both arrays. Consider the following example:
```
a = np.arange(3)
b = np.arange(3)[:, np.newaxis]
print(a)
print(b)
a + b
```
Just as before we stretched or broadcasted one value to match the shape of the other, here we've stretched *both* ``a`` and ``b`` to match a common shape, and the result is a two-dimensional array!
The geometry of these examples is visualized in the following figure.

The light boxes represent the broadcasted values: again, this extra memory is not actually allocated in the course of the operation, but it can be useful conceptually to imagine that it is.
## Rules of Broadcasting
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays:
- Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is *padded* with ones on its leading (left) side.
- Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
- Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
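These three rules can also be written down as a short function. The sketch below is only an illustration (it is not part of NumPy's API, although recent NumPy versions expose ``np.broadcast_shapes`` for the same purpose); it applies rules 1 through 3 to a pair of shapes:
```
def broadcast_shape(shape1, shape2):
    """Apply the three broadcasting rules to two shapes and return the result."""
    # Rule 1: left-pad the shorter shape with ones
    ndim = max(len(shape1), len(shape2))
    s1 = (1,) * (ndim - len(shape1)) + tuple(shape1)
    s2 = (1,) * (ndim - len(shape2)) + tuple(shape2)
    result = []
    for d1, d2 in zip(s1, s2):
        if d1 == d2 or d1 == 1 or d2 == 1:
            # Rule 2: a dimension of size 1 is stretched to match the other
            result.append(max(d1, d2))
        else:
            # Rule 3: otherwise the shapes are incompatible
            raise ValueError("shapes {} and {} are not compatible".format(shape1, shape2))
    return tuple(result)

print(broadcast_shape((2, 3), (3,)))   # (2, 3)
print(broadcast_shape((3, 1), (3,)))   # (3, 3)
# broadcast_shape((3, 2), (3,)) would raise ValueError, as in example 3 below
```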
To make these rules clear, let's consider a few examples in detail.
### Broadcasting example 1
Let's look at adding a two-dimensional array to a one-dimensional array:
```
M = np.ones((2, 3))
a = np.arange(3)
```
Let's consider an operation on these two arrays. The shape of the arrays are
- ``M.shape = (2, 3)``
- ``a.shape = (3,)``
We see by rule 1 that the array ``a`` has fewer dimensions, so we pad it on the left with ones:
- ``M.shape -> (2, 3)``
- ``a.shape -> (1, 3)``
By rule 2, we now see that the first dimension disagrees, so we stretch this dimension to match:
- ``M.shape -> (2, 3)``
- ``a.shape -> (2, 3)``
The shapes match, and we see that the final shape will be ``(2, 3)``:
```
M + a
```
### Broadcasting example 2
Let's take a look at an example where both arrays need to be broadcast:
```
a = np.arange(3).reshape((3, 1))
b = np.arange(3)
```
Again, we'll start by writing out the shape of the arrays:
- ``a.shape = (3, 1)``
- ``b.shape = (3,)``
Rule 1 says we must pad the shape of ``b`` with ones:
- ``a.shape -> (3, 1)``
- ``b.shape -> (1, 3)``
And rule 2 tells us that we upgrade each of these ones to match the corresponding size of the other array:
- ``a.shape -> (3, 3)``
- ``b.shape -> (3, 3)``
Because the result matches, these shapes are compatible. We can see this here:
```
a + b
```
### Broadcasting example 3
Now let's take a look at an example in which the two arrays are not compatible:
```
M = np.ones((3, 2))
a = np.arange(3)
```
This is just a slightly different situation than in the first example: the matrix ``M`` is transposed.
How does this affect the calculation? The shape of the arrays are
- ``M.shape = (3, 2)``
- ``a.shape = (3,)``
Again, rule 1 tells us that we must pad the shape of ``a`` with ones:
- ``M.shape -> (3, 2)``
- ``a.shape -> (1, 3)``
By rule 2, the first dimension of ``a`` is stretched to match that of ``M``:
- ``M.shape -> (3, 2)``
- ``a.shape -> (3, 3)``
Now we hit rule 3–the final shapes do not match, so these two arrays are incompatible, as we can observe by attempting this operation:
```
M + a
```
Note the potential confusion here: you could imagine making ``a`` and ``M`` compatible by, say, padding ``a``'s shape with ones on the right rather than the left.
But this is not how the broadcasting rules work!
That sort of flexibility might be useful in some cases, but it would lead to potential areas of ambiguity.
If right-side padding is what you'd like, you can do this explicitly by reshaping the array (we'll use the ``np.newaxis`` keyword introduced in The Basics of NumPy Arrays):
```
a[:, np.newaxis].shape
M + a[:, np.newaxis]
```
Also note that while we've been focusing on the ``+`` operator here, these broadcasting rules apply to *any* binary ``ufunc``.
For example, here is the ``logaddexp(a, b)`` function, which computes ``log(exp(a) + exp(b))`` with more precision than the naive approach:
```
np.logaddexp(M, a[:, np.newaxis])
```
For more information on the many available universal functions, refer to Computation on NumPy Arrays: Universal Functions.
## Broadcasting in Practice
Broadcasting operations form the core of many examples we'll see throughout this book.
We'll now take a look at a couple simple examples of where they can be useful.
### Centering an array
In the previous section, we saw that ufuncs allow a NumPy user to remove the need to explicitly write slow Python loops. Broadcasting extends this ability.
One commonly seen example is when centering an array of data.
Imagine you have an array of 10 observations, each of which consists of 3 values.
Using the standard convention, we'll store this in a $10 \times 3$ array:
```
X = np.random.random((10, 3))
```
We can compute the mean of each feature using the ``mean`` aggregate across the first dimension:
```
Xmean = X.mean(0)
Xmean
```
And now we can center the ``X`` array by subtracting the mean (this is a broadcasting operation):
```
X_centered = X - Xmean
```
To double-check that we've done this correctly, we can check that the centered array has near zero mean:
```
X_centered.mean(0)
```
To within machine precision, the mean is now zero.
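As a closely related sketch (not part of the original text), the same broadcasting pattern also standardizes each column to unit variance:
```
X_standardized = (X - Xmean) / X.std(0)
X_standardized.std(0)  # each column should now be ~1
```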
### Plotting a two-dimensional function
One place that broadcasting is very useful is in displaying images based on two-dimensional functions.
If we want to define a function $z = f(x, y)$, broadcasting can be used to compute the function across the grid:
```
# x and y have 50 steps from 0 to 5
x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 50)[:, np.newaxis]
z = np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)
```
We'll use Matplotlib to plot this two-dimensional array (these tools will be discussed in full in Density and Contour Plots):
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(z, origin='lower', extent=[0, 5, 0, 5],
cmap='viridis')
plt.colorbar();
```
The result is a compelling visualization of the two-dimensional function.
|
github_jupyter
|
# Hyper parameters
The goal here is to demonstrate how to optimise hyper-parameters of various models
The kernel is a short version of https://www.kaggle.com/mlisovyi/featureengineering-basic-model
```
max_events = None
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # needed for 3D scatter plots
%matplotlib inline
import seaborn as sns
import gc
import warnings
warnings.filterwarnings("ignore")
PATH='../input/'
import os
print(os.listdir(PATH))
```
Read in data
```
train = pd.read_csv('{}/train.csv'.format(PATH), nrows=max_events)
test = pd.read_csv('{}/test.csv'.format(PATH), nrows=max_events)
y = train['Cover_Type']
train.drop('Cover_Type', axis=1, inplace=True)
train.drop('Id', axis=1, inplace=True)
test.drop('Id', axis=1, inplace=True)
print('Train shape: {}'.format(train.shape))
print('Test shape: {}'.format(test.shape))
train.info(verbose=False)
```
## OHE into LE
Helper function to transfer One-Hot Encoding (OHE) into a Label Encoding (LE). It was taken from https://www.kaggle.com/mlisovyi/lighgbm-hyperoptimisation-with-f1-macro
The reason to convert OHE into LE is that we plan to use a tree-based model, and such models deal well with simple integer-label encoding. Note that this way we introduce an ordering between categories that is not there in reality, but in practice GBMs handle it well in most use cases anyway.
```
def convert_OHE2LE(df):
tmp_df = df.copy(deep=True)
for s_ in ['Soil_Type', 'Wilderness_Area']:
cols_s_ = [f_ for f_ in df.columns if f_.startswith(s_)]
sum_ohe = tmp_df[cols_s_].sum(axis=1).unique()
#deal with those OHE, where there is a sum over columns == 0
if 0 in sum_ohe:
print('The OHE in {} is incomplete. A new column will be added before label encoding'
.format(s_))
# dummy colmn name to be added
col_dummy = s_+'_dummy'
# add the column to the dataframe
tmp_df[col_dummy] = (tmp_df[cols_s_].sum(axis=1) == 0).astype(np.int8)
# add the name to the list of columns to be label-encoded
cols_s_.append(col_dummy)
# proof-check, that now the category is complete
sum_ohe = tmp_df[cols_s_].sum(axis=1).unique()
if 0 in sum_ohe:
print("The category completion did not work")
tmp_df[s_ + '_LE'] = tmp_df[cols_s_].idxmax(axis=1).str.replace(s_,'').astype(np.uint16)
tmp_df.drop(cols_s_, axis=1, inplace=True)
return tmp_df
def train_test_apply_func(train_, test_, func_):
xx = pd.concat([train_, test_])
xx_func = func_(xx)
train_ = xx_func.iloc[:train_.shape[0], :]
test_ = xx_func.iloc[train_.shape[0]:, :]
del xx, xx_func
return train_, test_
train_x, test_x = train_test_apply_func(train, test, convert_OHE2LE)
```
One little caveat: looking through the OHE columns, `Soil_Type` 7 and 15 occur in the test data but not in the training data.
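A quick way to verify this caveat is to check which one-hot `Soil_Type` columns never occur in the training data (a small sketch using the `train` and `test` DataFrames loaded above, before the OHE-to-LE conversion):
```
soil_cols = [c for c in train.columns if c.startswith('Soil_Type')]
only_in_test = [c for c in soil_cols if train[c].sum() == 0 and test[c].sum() > 0]
print(only_in_test)
```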
The head of the training dataset
```
train_x.head()
```
# Let's do some feature engineering
```
def preprocess(df_):
df_['fe_E_Min_02HDtH'] = (df_['Elevation']- df_['Horizontal_Distance_To_Hydrology']*0.2).astype(np.float32)
df_['fe_Distance_To_Hydrology'] = np.sqrt(df_['Horizontal_Distance_To_Hydrology']**2 +
df_['Vertical_Distance_To_Hydrology']**2).astype(np.float32)
feats_sub = [('Elevation_Min_VDtH', 'Elevation', 'Vertical_Distance_To_Hydrology'),
('HD_Hydrology_Min_Roadways', 'Horizontal_Distance_To_Hydrology', 'Horizontal_Distance_To_Roadways'),
('HD_Hydrology_Min_Fire', 'Horizontal_Distance_To_Hydrology', 'Horizontal_Distance_To_Fire_Points')]
feats_add = [('Elevation_Add_VDtH', 'Elevation', 'Vertical_Distance_To_Hydrology')]
for f_new, f1, f2 in feats_sub:
df_['fe_' + f_new] = (df_[f1] - df_[f2]).astype(np.float32)
for f_new, f1, f2 in feats_add:
df_['fe_' + f_new] = (df_[f1] + df_[f2]).astype(np.float32)
# The feature is advertised in https://douglas-fraser.com/forest_cover_management.pdf
df_['fe_Shade9_Mul_VDtH'] = (df_['Hillshade_9am'] * df_['Vertical_Distance_To_Hydrology']).astype(np.float32)
# this mapping comes from https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.info
climatic_zone = {}
geologic_zone = {}
for i in range(1,41):
if i <= 6:
climatic_zone[i] = 2
geologic_zone[i] = 7
elif i <= 8:
climatic_zone[i] = 3
geologic_zone[i] = 5
elif i == 9:
climatic_zone[i] = 4
geologic_zone[i] = 2
elif i <= 13:
climatic_zone[i] = 4
geologic_zone[i] = 7
elif i <= 15:
climatic_zone[i] = 5
geologic_zone[i] = 1
elif i <= 17:
climatic_zone[i] = 6
geologic_zone[i] = 1
elif i == 18:
climatic_zone[i] = 6
geologic_zone[i] = 7
elif i <= 21:
climatic_zone[i] = 7
geologic_zone[i] = 1
elif i <= 23:
climatic_zone[i] = 7
geologic_zone[i] = 2
elif i <= 34:
climatic_zone[i] = 7
geologic_zone[i] = 7
else:
climatic_zone[i] = 8
geologic_zone[i] = 7
df_['Climatic_zone_LE'] = df_['Soil_Type_LE'].map(climatic_zone).astype(np.uint8)
df_['Geologic_zone_LE'] = df_['Soil_Type_LE'].map(geologic_zone).astype(np.uint8)
return df_
train_x = preprocess(train_x)
test_x = preprocess(test_x)
```
# Optimise various classifiers
```
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.linear_model import LogisticRegression
import lightgbm as lgb
```
We subtract 1 to have the labels starting with 0, which is required for LightGBM
```
y = y-1
X_train, X_test, y_train, y_test = train_test_split(train_x, y, test_size=0.15, random_state=315, stratify=y)
```
Parameters to be used in optimisation for various models
```
def learning_rate_decay_power_0995(current_iter):
base_learning_rate = 0.15
lr = base_learning_rate * np.power(.995, current_iter)
return lr if lr > 1e-2 else 1e-2
clfs = {'rf': (RandomForestClassifier(n_estimators=200, max_depth=1, random_state=314, n_jobs=4),
{'max_depth': [20,25,30,35,40,45,50]},
{}),
'xt': (ExtraTreesClassifier(n_estimators=200, max_depth=1, max_features='auto',random_state=314, n_jobs=4),
{'max_depth': [20,25,30,35,40,45,50]},
{}),
'lgbm': (lgb.LGBMClassifier(max_depth=-1, min_child_samples=400,
random_state=314, silent=True, metric='None',
n_jobs=4, n_estimators=5000, learning_rate=0.1),
{'colsample_bytree': [0.75], 'min_child_weight': [0.1,1,10], 'num_leaves': [18, 20,22], 'subsample': [0.75]},
{'eval_set': [(X_test, y_test)],
'eval_metric': 'multi_error', 'verbose':500, 'early_stopping_rounds':100,
'callbacks':[lgb.reset_parameter(learning_rate=learning_rate_decay_power_0995)]}
)
}
gss = {}
for name, (clf, clf_pars, fit_pars) in clfs.items():
print('--------------- {} -----------'.format(name))
gs = GridSearchCV(clf, param_grid=clf_pars,
scoring='accuracy',
cv=5,
n_jobs=1,
refit=True,
verbose=True)
gs = gs.fit(X_train, y_train, **fit_pars)
print('{}: train = {:.4f}, test = {:.4f}+-{:.4f} with best params {}'.format(name,
gs.cv_results_['mean_train_score'][gs.best_index_],
gs.cv_results_['mean_test_score'][gs.best_index_],
gs.cv_results_['std_test_score'][gs.best_index_],
gs.best_params_
))
print("Valid+-Std Train : Parameters")
for i in np.argsort(gs.cv_results_['mean_test_score'])[-5:]:
print('{1:.3f}+-{3:.3f} {2:.3f} : {0}'.format(gs.cv_results_['params'][i],
gs.cv_results_['mean_test_score'][i],
gs.cv_results_['mean_train_score'][i],
gs.cv_results_['std_test_score'][i]))
gss[name] = gs
# gss = {}
# for name, (clf, clf_pars, fit_pars) in clfs.items():
# if name == 'lgbm':
# continue
# print('--------------- {} -----------'.format(name))
# gs = GridSearchCV(clf, param_grid=clf_pars,
# scoring='accuracy',
# cv=5,
# n_jobs=1,
# refit=True,
# verbose=True)
# gs = gs.fit(X_train, y_train, **fit_pars)
# print('{}: train = {:.4f}, test = {:.4f}+-{:.4f} with best params {}'.format(name,
# gs.cv_results_['mean_train_score'][gs.best_index_],
# gs.cv_results_['mean_test_score'][gs.best_index_],
# gs.cv_results_['std_test_score'][gs.best_index_],
# gs.best_params_
# ))
# print("Valid+-Std Train : Parameters")
# for i in np.argsort(gs.cv_results_['mean_test_score'])[-5:]:
# print('{1:.3f}+-{3:.3f} {2:.3f} : {0}'.format(gs.cv_results_['params'][i],
# gs.cv_results_['mean_test_score'][i],
# gs.cv_results_['mean_train_score'][i],
# gs.cv_results_['std_test_score'][i]))
# gss[name] = gs
```
|
github_jupyter
|
```
#hide
#default_exp clean
from nbdev.showdoc import show_doc
#export
import io,sys,json,glob,re
from fastcore.script import call_parse,Param,bool_arg
from fastcore.utils import ifnone
from nbdev.imports import Config
from nbdev.export import nbglob
from pathlib import Path
#hide
#For tests only
from nbdev.imports import *
```
# Clean notebooks
> Strip notebooks from superfluous metadata
To avoid pointless conflicts while working with jupyter notebooks (with different execution counts or cell metadata), it is recommended to clean the notebooks before committing anything (done automatically if you install the git hooks with `nbdev_install_git_hooks`). The following functions are used to do that.
## Utils
```
#export
def rm_execution_count(o):
"Remove execution count in `o`"
if 'execution_count' in o: o['execution_count'] = None
#export
colab_json = "application/vnd.google.colaboratory.intrinsic+json"
def clean_output_data_vnd(o):
"Remove `application/vnd.google.colaboratory.intrinsic+json` in data entries"
if 'data' in o:
data = o['data']
if colab_json in data:
new_data = {k:v for k,v in data.items() if k != colab_json}
o['data'] = new_data
#export
def clean_cell_output(cell):
"Remove execution count in `cell`"
if 'outputs' in cell:
for o in cell['outputs']:
rm_execution_count(o)
clean_output_data_vnd(o)
o.get('metadata', o).pop('tags', None)
#export
cell_metadata_keep = ["hide_input"]
nb_metadata_keep = ["kernelspec", "jekyll", "jupytext", "doc"]
#export
def clean_cell(cell, clear_all=False):
"Clean `cell` by removing superfluous metadata or everything except the input if `clear_all`"
rm_execution_count(cell)
if 'outputs' in cell:
if clear_all: cell['outputs'] = []
else: clean_cell_output(cell)
if cell['source'] == ['']: cell['source'] = []
cell['metadata'] = {} if clear_all else {k:v for k,v in cell['metadata'].items() if k in cell_metadata_keep}
tst = {'cell_type': 'code',
'execution_count': 26,
'metadata': {'hide_input': True, 'meta': 23},
'outputs': [{'execution_count': 2,
'data': {
'application/vnd.google.colaboratory.intrinsic+json': {
'type': 'string'},
'plain/text': ['sample output',]
},
'output': 'super'}],
'source': 'awesome_code'}
tst1 = tst.copy()
clean_cell(tst)
test_eq(tst, {'cell_type': 'code',
'execution_count': None,
'metadata': {'hide_input': True},
'outputs': [{'execution_count': None,
'data': {'plain/text': ['sample output',]},
'output': 'super'}],
'source': 'awesome_code'})
clean_cell(tst1, clear_all=True)
test_eq(tst1, {'cell_type': 'code',
'execution_count': None,
'metadata': {},
'outputs': [],
'source': 'awesome_code'})
tst2 = {
'metadata': {'tags':[]},
'outputs': [{
'metadata': {
'tags':[]
}}],
"source": [
""
]}
clean_cell(tst2, clear_all=False)
test_eq(tst2, {
'metadata': {},
'outputs': [{
'metadata':{}}],
'source': []})
#export
def clean_nb(nb, clear_all=False):
"Clean `nb` from superfluous metadata, passing `clear_all` to `clean_cell`"
for c in nb['cells']: clean_cell(c, clear_all=clear_all)
nb['metadata'] = {k:v for k,v in nb['metadata'].items() if k in nb_metadata_keep }
tst = {'cell_type': 'code',
'execution_count': 26,
'metadata': {'hide_input': True, 'meta': 23},
'outputs': [{'execution_count': 2,
'data': {
'application/vnd.google.colaboratory.intrinsic+json': {
'type': 'string'},
'plain/text': ['sample output',]
},
'output': 'super'}],
'source': 'awesome_code'}
nb = {'metadata': {'kernelspec': 'some_spec', 'jekyll': 'some_meta', 'meta': 37},
'cells': [tst]}
clean_nb(nb)
test_eq(nb['cells'][0], {'cell_type': 'code',
'execution_count': None,
'metadata': {'hide_input': True},
'outputs': [{'execution_count': None,
'data': { 'plain/text': ['sample output',]},
'output': 'super'}],
'source': 'awesome_code'})
test_eq(nb['metadata'], {'kernelspec': 'some_spec', 'jekyll': 'some_meta'})
#export
def _print_output(nb):
"Print `nb` in stdout for git things"
_output_stream = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
x = json.dumps(nb, sort_keys=True, indent=1, ensure_ascii=False)
_output_stream.write(x)
_output_stream.write("\n")
_output_stream.flush()
```
## Main function
```
#export
@call_parse
def nbdev_clean_nbs(fname:Param("A notebook name or glob to convert", str)=None,
clear_all:Param("Clean all metadata and outputs", bool_arg)=False,
disp:Param("Print the cleaned outputs", bool_arg)=False,
                    read_input_stream:Param("Read input stream and not nb folder")=False):
"Clean all notebooks in `fname` to avoid merge conflicts"
#Git hooks will pass the notebooks in the stdin
if read_input_stream and sys.stdin:
input_stream = io.TextIOWrapper(sys.stdin.buffer, encoding='utf-8')
nb = json.load(input_stream)
clean_nb(nb, clear_all=clear_all)
_print_output(nb)
return
path = None
if fname is None:
try: path = get_config().path("nbs_path")
except Exception as e: path = Path.cwd()
files = nbglob(fname=ifnone(fname,path))
for f in files:
if not str(f).endswith('.ipynb'): continue
nb = json.loads(open(f, 'r', encoding='utf-8').read())
clean_nb(nb, clear_all=clear_all)
if disp: _print_output(nb)
else:
x = json.dumps(nb, sort_keys=True, indent=1, ensure_ascii=False)
with io.open(f, 'w', encoding='utf-8') as f:
f.write(x)
f.write("\n")
```
By default (`fname` left to `None`), all the notebooks in `lib_folder` are cleaned. You can opt in to fully clean the notebook by removing every bit of metadata and the cell outputs by passing `clear_all=True`. `disp` is only used internally with git hooks and will print the cleaned notebook instead of saving it. The same goes for `read_input_stream`, which reads the notebook from the input stream instead of from the file names.
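For reference, here is a minimal usage sketch of the corresponding console script, with flags mirroring the parameters defined above (the notebook name is only a placeholder):
```
# Shell usage sketches (run in a terminal):
#   nbdev_clean_nbs
#   nbdev_clean_nbs --fname 00_core.ipynb --clear_all True
```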
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
|
github_jupyter
|
# openCV Configure for Raspberry PI
What is openCV?
* Collection of computer vision tools in one place
* From computational photography to object detection
Where is openCV?
* http://opencv.org/
What resources did I use?
* http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/
* http://www.pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/
The step-by-step process of getting it going:
1. Make sure we have enough room.
* ```df -h```
* expand the file system with
* ```sudo raspi-config```
1. Make room by removing the Wolfram engine
* ```sudo apt-get purge wolfram-engine```
## Install the tools
1. Dependencies
```
sudo apt-get update
sudo apt-get upgrade
```
Make sure all the dev dependencies for Python are installed
```
sudo apt-get install python3-dev
sudo apt install python3-matplotlib
```
```
sudo pip3 install opencv-contrib-python
```
Scripts
Initial
```
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk2.0-dev libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
```
Extras just in case for camera and qt
```
sudo apt-get install libqtgui4
sudo modprobe bcm2835-v4l2
sudo apt-get install libqt4-test
```
Required, although the need does not become apparent until runtime
```
sudo apt-get install libhdf5-dev
sudo apt-get install libhdf5-serial-dev
```
### Old original
-----
CMake is needed
```
sudo apt-get install build-essential cmake pkg-config
```
Image file support
```
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
```
Video I/O packages
```
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
```
highGUI GTK dependencies
```
sudo apt-get install libgtk2.0-dev
```
FORTRAN compiler and optimized matrix libraries
```
sudo apt-get install libatlas-base-dev gfortran
```
## Get the source code openCV 3.2
Create a directory
```
cd ~
mkdir opencv
```
```
wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.2.0.zip
unzip opencv.zip
```
```
wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.2.0.zip
unzip opencv_contrib.zip
```
# setup virtualenv
```
sudo pip3 install virtualenv virtualenvwrapper
sudo rm -rf ~/.cache/pip
```
Add this to your .profile
```
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh
```
Create the virtualenv for opencv for python3
```
mkvirtualenv cv -p python3
```
Update the environment
```
source ~/.profile
workon cv
```
Now you are ready to start compiling.
# Set up Python in the virtualenv
* Good place to start running tmux
Make sure you see the prompt:
```
(cv) pi@cvpi:~/opencv $
```
Install numpy
```
pip3 install numpy
```
# Compile and install opencv
* get tmux going
```
workon cv
cd ~/opencv/opencv-3.2.0/
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv/opencv_contrib-3.2.0/modules \
-D BUILD_EXAMPLES=ON ..
```
finally, make it
```
make -j4
```
```
sudo make install
sudo ldconfig
```
```
(cv) pi@cvpi:~/opencv/opencv-3.2.0/build $ ls -l /usr/local/lib/python3.4/site-packages/
total 3212
-rw-r--r-- 1 root staff 3287708 Feb 12 04:35 cv2.cpython-34m.so
```
```
(cv) pi@cvpi:~/opencv/opencv-3.2.0/build $ cd /usr/local/lib/python3.4/site-packages/
(cv) pi@cvpi:/usr/local/lib/python3.4/site-packages $ sudo mv cv2.cpython-34m.so cv2.so
(cv) pi@cvpi:/usr/local/lib/python3.4/site-packages $ cd ~/.virtualenvs/cv/lib/python3.4/site-packages/
(cv) pi@cvpi:~/.virtualenvs/cv/lib/python3.4/site-packages $ ln -s /usr/local/lib/python3.4/site-packages/cv2.so cv2.so
(cv) pi@cvpi:~/.virtualenvs/cv/lib/python3.4/site-packages $ source ~/.profile
pi@cvpi:~/.virtualenvs/cv/lib/python3.4/site-packages $ cd
pi@cvpi:~ $ workon cv
```
```
In [1]: import cv2
In [2]: cv2.__version__
Out[2]: '3.2.0'
```
|
github_jupyter
|
<img src="data/photutils_banner.svg">
## Photutils
- Code: https://github.com/astropy/photutils
- Documentation: http://photutils.readthedocs.org/en/stable/
- Issue Tracker: https://github.com/astropy/photutils/issues
## Photutils Overview
- Background and background noise estimation
- Source Detection and Extraction
- DAOFIND and IRAF's starfind
- **Image segmentation**
- local peak finder
- **Aperture photometry**
- PSF photometry
- PSF matching
- Centroids
- Morphological properties
- Elliptical isophote analysis
## Preliminaries
```
# initial imports
import numpy as np
import matplotlib.pyplot as plt
# change some default plotting parameters
import matplotlib as mpl
mpl.rcParams['image.origin'] = 'lower'
mpl.rcParams['image.interpolation'] = 'nearest'
mpl.rcParams['image.cmap'] = 'viridis'
# Run the %matplotlib magic command to enable inline plotting
# in the current notebook. Choose one of these:
%matplotlib inline
# %matplotlib notebook
```
### Load the data
We'll start by reading data and error arrays from FITS files. These are cutouts from the HST Extreme-Deep Field (XDF) taken with WFC3/IR in the F160W filter.
```
from astropy.io import fits
sci_fn = 'data/xdf_hst_wfc3ir_60mas_f160w_sci.fits'
rms_fn = 'data/xdf_hst_wfc3ir_60mas_f160w_rms.fits'
sci_hdulist = fits.open(sci_fn)
rms_hdulist = fits.open(rms_fn)
sci_hdulist[0].header['BUNIT'] = 'electron/s'
```
Print some info about the data.
```
sci_hdulist.info()
```
Define the data and error arrays.
```
data = sci_hdulist[0].data.astype(float)
error = rms_hdulist[0].data.astype(float)
```
Extract the data header and create a WCS object.
```
from astropy.wcs import WCS
hdr = sci_hdulist[0].header
wcs = WCS(hdr)
```
Display the data.
```
from astropy.visualization import simple_norm
norm = simple_norm(data, 'sqrt', percent=99.5)
plt.imshow(data, norm=norm)
plt.title('XDF F160W Cutout')
```
## Part 1: Aperture Photometry
Photutils provides circular, elliptical, and rectangular aperture shapes (plus annulus versions of each). These are names of the aperture classes, defined in pixel coordinates:
* `CircularAperture`
* `CircularAnnulus`
* `EllipticalAperture`
* `EllipticalAnnulus`
* `RectangularAperture`
* `RectangularAnnulus`
Along with variants of each, defined in celestial coordinates:
* `SkyCircularAperture`
* `SkyCircularAnnulus`
* `SkyEllipticalAperture`
* `SkyEllipticalAnnulus`
* `SkyRectangularAperture`
* `SkyRectangularAnnulus`
## Methods for handling aperture/pixel intersection
In general, the apertures will only partially overlap some of the pixels in the data.
There are three methods for handling the aperture overlap with the pixel grid of the data array.
<img src="data/photutils_aperture_methods.svg">
NOTE: the `subpixels` keyword is ignored for the **'exact'** and **'center'** methods.
### Perform circular-aperture photometry on some sources in the XDF
First, we define a circular aperture at a given position and radius (in pixels).
```
from photutils import CircularAperture
position = (90.73, 59.43) # (x, y) pixel position
radius = 5. # pixels
aperture = CircularAperture(position, r=radius)
aperture
print(aperture)
```
We can plot the aperture on the data using the aperture `plot()` method:
```
plt.imshow(data, norm=norm)
aperture.plot(color='red', lw=2)
```
Now let's perform photometry on the data using the `aperture_photometry()` function. **The default aperture method is 'exact'.**
Also note that the input data is assumed to have zero background. If that is not the case, please see the documentation for the `photutils.background` subpackage for tools to help subtract the background.
See the `photutils_local_background.ipynb` notebook for examples of local background subtraction.
The background was already subtracted for our XDF example data.
```
from photutils import aperture_photometry
phot = aperture_photometry(data, aperture)
phot
```
The output is an Astropy `QTable` (Quantity Table) with sum of data values within the aperture (using the defined pixel overlap method).
The table also contains metadata, which is accessed by the `meta` attribute of the table. The metadata is stored as a python (ordered) dictionary:
```
phot.meta
phot.meta['version']
```
Aperture photometry using the **'center'** method gives a slightly different (and less accurate) answer:
```
phot = aperture_photometry(data, aperture, method='center')
phot
```
Now perform aperture photometry using the **'subpixel'** method with `subpixels=5`:
These parameters are equivalent to SExtractor aperture photometry.
```
phot = aperture_photometry(data, aperture, method='subpixel', subpixels=5)
phot
```
## Photometric Errors
We can also input an error array to get the photometric errors.
```
phot = aperture_photometry(data, aperture, error=error)
phot
```
The error array in our XDF FITS file represents only the background error. If we want to include the Poisson error of the source we need to calculate the **total** error:
$\sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{b}}^2 +
\frac{I}{g}}$
where $\sigma_{\mathrm{b}}$ is the background-only error,
$I$ are the data values, and $g$ is the "effective gain".
The "effective gain" is the value (or an array if it's variable across an image) needed to convert the data image to count units (e.g. electrons or photons), where Poisson statistics apply.
Photutils provides a `calc_total_error()` function to perform this calculation.
```
# this time include the Poisson error of the source
from photutils.utils import calc_total_error
# our data array is in units of e-/s
# so the "effective gain" should be the exposure time
eff_gain = hdr['TEXPTIME']
tot_error = calc_total_error(data, error, eff_gain)
phot = aperture_photometry(data, aperture, error=tot_error)
phot
```
The total error increased only slightly because this is a small faint source.
## Units
We can also input the data (and error) units via the `unit` keyword.
```
# input the data units
import astropy.units as u
unit = u.electron / u.s
phot = aperture_photometry(data, aperture, error=tot_error, unit=unit)
phot
phot['aperture_sum']
```
Instead of inputting units via the units keyword, `Quantity` inputs for data and error are also allowed.
```
phot = aperture_photometry(data * unit, aperture, error=tot_error * u.adu)
phot
```
The `unit` will not override the data or error unit.
```
phot = aperture_photometry(data * unit, aperture, error=tot_error * u.adu, unit=u.photon)
phot
```
## Performing aperture photometry at multiple positions
Now let's perform aperture photometry for three sources (all with the same aperture size). We simply define three (x, y) positions.
```
positions = [(90.73, 59.43), (73.63, 139.41), (43.62, 61.63)]
radius = 5.
apertures = CircularAperture(positions, r=radius)
```
Let's plot these three apertures on the data.
```
plt.imshow(data, norm=norm)
apertures.plot(color='red', lw=2)
```
Now let's perform aperture photometry.
```
phot = aperture_photometry(data, apertures, error=tot_error, unit=unit)
phot
```
Each source is a row in the table and is given a unique **id** (the first column).
## Adding columns to the photometry table
We can add columns to the photometry table. Let's calculate the signal-to-noise (SNR) ratio of our sources and add it as a new column to the table.
```
snr = phot['aperture_sum'] / phot['aperture_sum_err'] # units will cancel
phot['snr'] = snr
phot
```
Now calculate the F160W AB magnitude and add it to the table.
```
f160w_zpt = 25.9463
# NOTE that the log10() function can be applied only to dimensionless quantities
# so we use the .value attribute to get the plain number value of the aperture sum
abmag = -2.5 * np.log10(phot['aperture_sum'].value) + f160w_zpt
phot['abmag'] = abmag
phot
```
Now, using the WCS defined above, calculate the sky coordinates for these objects and add it to the table.
```
from astropy.wcs.utils import pixel_to_skycoord
# convert pixel positions to sky coordinates
x, y = np.transpose(positions)
coord = pixel_to_skycoord(x, y, wcs)
# we can add the astropy SkyCoord object directly to the table
phot['sky coord'] = coord
phot
```
We can also add separate RA and Dec columns, if preferred.
```
phot['ra_icrs'] = coord.icrs.ra
phot['dec_icrs'] = coord.icrs.dec
phot
```
If we write the table to an ASCII file using the ECSV format we can read it back in preserving all of the units, metadata, and SkyCoord objects.
```
phot.write('my_photometry.txt', format='ascii.ecsv')
# view the table on disk
!cat my_photometry.txt
```
Now read the table in ECSV format.
```
from astropy.table import QTable
tbl = QTable.read('my_photometry.txt', format='ascii.ecsv')
tbl
tbl.meta
tbl['aperture_sum'] # Quantity array
tbl['sky coord'] # SkyCoord array
```
## Aperture photometry using Sky apertures
First, let's define the sky coordinates by converting our pixel coordinates.
```
positions = [(90.73, 59.43), (73.63, 139.41), (43.62, 61.63)]
x, y = np.transpose(positions)
coord = pixel_to_skycoord(x, y, wcs)
coord
```
Now define circular apertures in sky coordinates.
For sky apertures, the aperture radius must be a `Quantity`, in either pixel or angular units.
```
from photutils import SkyCircularAperture
radius = 5. * u.pix
sky_apers = SkyCircularAperture(coord, r=radius)
sky_apers.r
radius = 0.5 * u.arcsec
sky_apers = SkyCircularAperture(coord, r=radius)
sky_apers.r
```
When using a sky aperture in angular units, `aperture_photometry` needs the WCS transformation, which can be provided in two ways.
```
# via the wcs keyword
phot = aperture_photometry(data, sky_apers, wcs=wcs)
phot
# or via a FITS hdu (i.e. header and data) as the input "data"
phot = aperture_photometry(sci_hdulist[0], sky_apers)
phot
```
## More on Aperture Photometry in the Extended notebook:
- Bad pixel masking
- Encircled flux
- Aperture photometry at multiple positions using multiple apertures
Also see the local background subtraction notebook (`photutils_local_backgrounds.ipynb`).
## Part 2: Image Segmentation
Image segmentation is the process where sources are identified and labeled in an image.
The sources are detected by using an S/N threshold level and defining the minimum number of pixels required within a source.
First, let's define a threshold image at 2$\sigma$ (per pixel) above the background.
```
bkg = 0. # background level in this image
nsigma = 2.
threshold = bkg + (nsigma * error) # this should be background-only error
```
Now let's detect "8-connected" sources of minimum size 5 pixels where each pixel is 2$\sigma$ above the background.
"8-connected" pixels touch along their edges or corners. "4-connected" pixels touch along their edges. For reference, SExtractor uses "8-connected" pixels.
The result is a segmentation image (`SegmentationImage` object). The segmentation image is the isophotal footprint of each source above the threshold.
```
from photutils import detect_sources
npixels = 5
segm = detect_sources(data, threshold, npixels)
print('Found {0} sources'.format(segm.nlabels))
```
Display the segmentation image.
```
from photutils.utils import random_cmap
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 8))
ax1.imshow(data, norm=norm)
lbl1 = ax1.set_title('Data')
ax2.imshow(segm, cmap=segm.cmap())
lbl2 = ax2.set_title('Segmentation Image')
```
It is better to filter (smooth) the data prior to source detection.
Let's use a 5x5 Gaussian kernel with a FWHM of 2 pixels.
```
from astropy.convolution import Gaussian2DKernel
from astropy.stats import gaussian_fwhm_to_sigma
sigma = 2.0 * gaussian_fwhm_to_sigma # FWHM = 2 pixels
kernel = Gaussian2DKernel(sigma, x_size=5, y_size=5)
kernel.normalize()
ssegm = detect_sources(data, threshold, npixels, filter_kernel=kernel)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 8))
ax1.imshow(segm, cmap=segm.cmap())
lbl1 = ax1.set_title('Original Data')
ax2.imshow(ssegm, cmap=ssegm.cmap())
lbl2 = ax2.set_title('Smoothed Data')
```
### Source deblending
Note above that some of our detected sources were blended. We can deblend them using the `deblend_sources()` function, which uses a combination of multi-thresholding and watershed segmentation.
```
from photutils import deblend_sources
segm2 = deblend_sources(data, ssegm, npixels, filter_kernel=kernel,
contrast=0.001, nlevels=32)
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 8))
ax1.imshow(data, norm=norm)
ax1.set_title('Data')
ax2.imshow(ssegm, cmap=ssegm.cmap())
ax2.set_title('Original Segmentation Image')
ax3.imshow(segm2, cmap=segm2.cmap())
ax3.set_title('Deblended Segmentation Image')
print('Found {0} sources'.format(segm2.max))
```
## Measure the photometry and morphological properties of detected sources
```
from photutils import source_properties
catalog = source_properties(data, segm2, error=error, wcs=wcs)
```
`catalog` is a `SourceCatalog` object. It behaves like a list of `SourceProperties` objects, one for each source.
```
catalog
catalog[0] # the first source
catalog[0].xcentroid # the xcentroid of the first source
```
Please go [here](http://photutils.readthedocs.org/en/latest/api/photutils.segmentation.SourceProperties.html#photutils.segmentation.SourceProperties) to see the complete list of available source properties.
We can create a Table of isophotal photometry and morphological properties using the ``to_table()`` method of `SourceCatalog`:
```
tbl = catalog.to_table()
tbl
```
Additional properties (not stored in the table) can be accessed directly via the `SourceCatalog` object.
```
# get a single object (id=12)
obj = catalog[11]
obj.id
obj
```
Let's plot the cutouts of the data and error images for this source.
```
fig, ax = plt.subplots(figsize=(12, 8), ncols=3)
ax[0].imshow(obj.make_cutout(segm2.data))
ax[0].set_title('Source id={} Segment'.format(obj.id))
ax[1].imshow(obj.data_cutout_ma)
ax[1].set_title('Source id={} Data'.format(obj.id))
ax[2].imshow(obj.error_cutout_ma)
ax[2].set_title('Source id={} Error'.format(obj.id))
```
## More on Image Segmentation in the Extended notebook:
- Define a subset of source labels
- Define a subset of source properties
- Additional sources properties, such a cutout images
- Define the approximate isophotal ellipses for each source
## Also see the two notebooks on Photutils PSF-fitting photometry:
- `gaussian_psf_photometry.ipynb`
- `image_psf_photometry_withNIRCam.ipynb`
|
github_jupyter
|
# Extracting condition-specific trials
The aim of this section is to extract the trials according to the trigger channel. We will explain how the events can be generated from the stimulus channels and how to extract condition-specific trials (epochs). Once the trials are extracted, bad epochs will be identified and excluded based on their peak-to-peak signal amplitude.
## Preparation
Import the relevant Python modules:
```
import os.path as op
import os
import sys
import numpy as np
import mne
import matplotlib.pyplot as plt
```
Set the paths for the data and results. Note that these will depend on your local setup.
```
data_path = r'C:\Users\JensenO\Dropbox\FLUX\Development\dataRaw'
result_path = r'C:\Users\JensenO\Dropbox\FLUX\Development\dataResults'
file_name = 'training_raw'
```
## Reading the events from the stimulus channels
First read all the events from the stimulus channel (in our case, STI101). We will loop over the 2 fif-files created in the previous step.
```
for subfile in range(1, 3):
path_file = os.path.join(result_path,file_name + 'ica-' + str(subfile) + '.fif')
raw = mne.io.read_raw_fif(path_file,allow_maxshield=True,verbose=True,preload=True)
events = mne.find_events(raw, stim_channel='STI101',min_duration=0.001001)
    # Save the events in a dedicated FIF-file:
filename_events = op.join(result_path,file_name + 'eve-' + str(subfile) +'.fif')
mne.write_events(filename_events,events)
```
The code above extracts the events from the trigger channel STI101. The results are stored in the array *events*, where the first column holds the sample index and the third column the corresponding trigger value. Note that the events are concatenated across the 2 subfiles.
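As a quick sanity check (a small sketch using the `events` array created above), you can count how many events carry each trigger value:
```
values, counts = np.unique(events[:, 2], return_counts=True)
print(dict(zip(values, counts)))
```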
To visualize a snippet of the events-array write:
```
%matplotlib qt
plt.stem(events[:,0],events[:,2])
plt.xlim(1950000,2000000)
plt.xlabel('samples')
plt.ylabel('Trigger value (STI101)')
plt.show()
```
The figure shows an example of part of the events array. The trigger values indicate specific events within the trials. Here the 'attend left' trials are coded with the trigger value '21', whereas the 'attend right' trials use '22'.
## Defining the epochs (trials) according to the event values
The next step is to extract the left and right trials:
```
events_id = {'left':21,'right':22}
raw_list = list()
events_list = list()
for subfile in range(1, 3):
# Read in the data from the Result path
path_file = os.path.join(result_path,file_name + 'ica-' + str(subfile) + '.fif')
raw = mne.io.read_raw_fif(path_file, allow_maxshield=True,verbose=True)
filename_events = op.join(result_path,file_name + 'eve-' + str(subfile) +'.fif')
events = mne.read_events(filename_events, verbose=True)
raw_list.append(raw)
events_list.append(events)
```
Now concatenate the raw instances as if they were continuous, i.e. combine them over the 2 subfiles.
```
raw, events = mne.concatenate_raws(raw_list,events_list=events_list)
del raw_list
```
Set the peak-to-peak amplitude thresholds for trial rejection. These values may change depending on the quality of the data.
```
reject = dict(grad=5000e-13, # T / m (gradiometers)
mag=5e-12, # T (magnetometers)
#eeg=200e-6, # V (EEG channels)
#eog=150e-6 # V (EOG channels)
)
```
We will use time-windows of interest starting 2.5 s prior to the stimulus onset and ending 2 s after. Now perform the epoching using the events and events_id as well as the selected channels:
```
epochs = mne.Epochs(raw,
events, events_id,
tmin=-2.5 , tmax=2,
baseline=None,
proj=True,
picks = 'all',
detrend = 1,
reject=reject,
reject_by_annotation=True,
preload=True,
verbose=True)
# Show epochs details
epochs
```
By calling *epochs* we can check that the number of events is 305 of which 152 are left attention trials and 153 right attention trials. Moreover, we can see that no baseline correction was applied at this stage.
Now we plot an overview of the rejected epochs:
```
epochs.plot_drop_log();
```
A few percent of the trials were rejected due to MEG artifacts in the magnetometers.
Now we save the epoched data in an FIF-file. Note this file will include trials from the 2 subfiles.
```
path_outfile = os.path.join(result_path,'training_epo.fif')
epochs.save(path_outfile,overwrite=True)
```
## Plotting the trials
To show the trials for the left-condition for the MEG gradiometers write:
```
%matplotlib inline
epochs.plot(n_epochs=10,picks=['grad'],event_id={'left':21});
```
The plot above shows 10 trials of type left; only gradiometers shown.
To show the trigger (stimulus channels) write:
```
%matplotlib inline
epochs.plot(n_epochs=1,picks=['stim'],event_id={'left': 21});
```
An example of the trigger channels for one trial.
Showing the trigger channels is often useful for verifying that correct trials have been selected. Note that STI001 to STI016 denote the individual trigger lines which are 'on' (1) or 'off' (0). The channel STI101 is a combination of the trigger lines ( STI101 = STI001 + 2 * STI002 + 4 * STI003 + 8 * STI004 + ...)
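To double-check this relationship, here is a small sketch (assuming the individual trigger lines are indeed named STI001 to STI016, as described above) that rebuilds the composite value from the binary lines and compares it to STI101:
```
line_names = ['STI{:03d}'.format(i) for i in range(1, 17)]
binary = (raw.get_data(picks=line_names) > 0).astype(int)   # shape (16, n_samples)
weights = 2 ** np.arange(16)
composite = (binary * weights[:, None]).sum(axis=0)
sti101 = raw.get_data(picks=['STI101'])[0]
print(np.array_equal(composite, sti101))
```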
To show all the trials belonging to *left* for a representative gradiometer (MEG2343), use the plot_image function. In the following example we also lowpass filter the individual trials at 30 Hz and shorten them (crop) to a -100 to 400 ms interval:
```
%matplotlib inline
epochs['left'].filter(0.0,30).crop(-0.1,0.4).plot_image(picks=['MEG2343'],vmin=-150,vmax=150);
```
## Preregistration and publications
Publication, example:
"The data were segmented into intervals of 4.5 s, ranging from 2.5 s prior to stimulus onset and 2 s after. To ensure that no artefacts were missed, trials in which the gradiometers values exceeded 5000 fT/cm or magnetometers exceeded 5000 fT were rejected as well as trials previously annotated with muscle artefacts."
|
github_jupyter
|
# cadCAD Tutorials: The Robot and the Marbles, part 3
In parts [1](../robot-marbles-part-1/robot-marbles-part-1.ipynb) and [2](../robot-marbles-part-2/robot-marbles-part-2.ipynb) we introduced the 'language' in which a system must be described in order for it to be interpretable by cadCAD and some of the basic concepts of the library:
* State Variables
* Timestep
* State Update Functions
* Partial State Update Blocks
* Simulation Configuration Parameters
* Policies
In this notebook we'll look at how subsystems within a system can operate at different frequencies. But first let's copy the base configuration with which we ended Part 2. Here's the description of that system:
__The robot and the marbles__
* Picture a box (`box_A`) with ten marbles in it; an empty box (`box_B`) next to the first one; and __two__ robot arms capable of taking a marble from any one of the boxes and dropping it into the other one.
* The robots are programmed to take one marble at a time from the box containing the largest number of marbles and drop it in the other box. They repeat that process until the boxes contain an equal number of marbles.
* The robots act simultaneously; in other words, they assess the state of the system at the exact same time, and decide what their action will be based on that information.
```
%%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# List of all the state variables in the system and their initial values
genesis_states = {
'box_A': 10, # as per the description of the example, box_A starts out with 10 marbles in it
'box_B': 0 # as per the description of the example, box_B starts out empty
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Settings of general simulation parameters, unrelated to the system itself
# `T` is a range with the number of discrete units of time the simulation will run for;
# `N` is the number of times the simulation will be run (Monte Carlo runs)
# In this example, we'll run the simulation once (N=1) and its duration will be 10 timesteps
# We'll cover the `M` key in a future article. For now, let's omit it
sim_config_dict = {
'T': range(10),
'N': 1,
#'M': {}
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We specify the robot arm's logic in a Policy Function
def robot_arm(params, step, sH, s):
add_to_A = 0
if (s['box_A'] > s['box_B']):
add_to_A = -1
elif (s['box_A'] < s['box_B']):
add_to_A = 1
return({'add_to_A': add_to_A, 'add_to_B': -add_to_A})
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We make the state update functions less "intelligent",
# ie. they simply add the number of marbles specified in _input
# (which, per the policy function definition, may be negative)
def increment_A(params, step, sH, s, _input):
y = 'box_A'
x = s['box_A'] + _input['add_to_A']
return (y, x)
def increment_B(params, step, sH, s, _input):
y = 'box_B'
x = s['box_B'] + _input['add_to_B']
return (y, x)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# In the Partial State Update Blocks,
# the user specifies if state update functions will be run in series or in parallel
# and the policy functions that will be evaluated in that block
partial_state_update_blocks = [
{
'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
'robot_arm_1': robot_arm,
'robot_arm_2': robot_arm
},
'variables': { # The following state variables will be updated simultaneously
'box_A': increment_A,
'box_B': increment_B
}
}
]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#imported some addition utilities to help with configuration set-up
from cadCAD.configuration.utils import config_sim
from cadCAD.configuration import Experiment
from cadCAD import configs
del configs[:] # Clear any prior configs
exp = Experiment()
c = config_sim(sim_config_dict)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
exp.append_configs(initial_state=genesis_states, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_configs=c #preprocessed dictionaries containing simulation parameters
)
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
exec_mode = ExecutionMode()
local_mode_ctx = ExecutionContext(exec_mode.local_mode)
simulation = Executor(local_mode_ctx, configs) # Pass the configuration object inside an array
raw_result, tensor, sessions = simulation.execute() # The `execute()` method returns a tuple; its first element contains the raw results
%matplotlib inline
import pandas as pd
df = pd.DataFrame(raw_result)
df.plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
colormap = 'RdYlGn',
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
```
# Asynchronous Subsystems
We have defined that the robots operate simultaneously on the boxes of marbles. But it is often the case that agents within a system operate asynchronously, each having their own operation frequencies or conditions.
Suppose that instead of acting simultaneously, the robots in our examples operated in the following manner:
* Robot 1: acts once every 2 timesteps
* Robot 2: acts once every 3 timesteps
One way to simulate the system with this change is to introduce a check of the current timestep before the robots act, with the definition of separate policy functions for each robot arm.
```
%%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We specify each of the robots logic in a Policy Function
robots_periods = [2,3] # Robot 1 acts once every 2 timesteps; Robot 2 acts once every 3 timesteps
def get_current_timestep(cur_substep, s):
if cur_substep == 1:
return s['timestep']+1
return s['timestep']
def robot_arm_1(params, step, sH, s):
_robotId = 1
if get_current_timestep(step, s)%robots_periods[_robotId-1]==0: # on timesteps that are multiple of 2, Robot 1 acts
return robot_arm(params, step, sH, s)
else:
return({'add_to_A': 0, 'add_to_B': 0}) # for all other timesteps, Robot 1 doesn't interfere with the system
def robot_arm_2(params, step, sH, s):
_robotId = 2
if get_current_timestep(step, s)%robots_periods[_robotId-1]==0: # on timesteps that are multiple of 3, Robot 2 acts
return robot_arm(params, step, sH, s)
else:
return({'add_to_A': 0, 'add_to_B': 0}) # for all other timesteps, Robot 2 doesn't interfere with the system
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# In the Partial State Update Blocks,
# the user specifies if state update functions will be run in series or in parallel
# and the policy functions that will be evaluated in that block
partial_state_update_blocks = [
{
'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
'robot_arm_1': robot_arm_1,
'robot_arm_2': robot_arm_2
},
'variables': { # The following state variables will be updated simultaneously
'box_A': increment_A,
'box_B': increment_B
}
}
]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
del configs[:] # Clear any prior configs
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
exp.append_configs(initial_state=genesis_states, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_configs=c #preprocessed dictionaries containing simulation parameters
)
executor = Executor(local_mode_ctx, configs) # Pass the configuration object inside an array
raw_result, tensor, sessions = executor.execute() # The `execute()` method returns a tuple; its first element contains the raw results
simulation_result = pd.DataFrame(raw_result)
simulation_result.plot('timestep', ['box_A', 'box_B'],
grid=True,
xticks=list(simulation_result['timestep'].drop_duplicates()),
yticks=list(range(1+max(simulation_result['box_A'].max(),simulation_result['box_B'].max()))),
colormap = 'RdYlGn'
)
```
Let's take a step-by-step look at what the simulation tells us:
* Timestep 1: the number of marbles in the boxes does not change, as none of the robots act
* Timestep 2: Robot 1 acts, Robot 2 doesn't; resulting in one marble being moved from box A to box B
* Timestep 3: Robot 2 acts, Robot 1 doesn't; resulting in one marble being moved from box A to box B
* Timestep 4: Robot 1 acts, Robot 2 doesn't; resulting in one marble being moved from box A to box B
* Timestep 5: the number of marbles in the boxes does not change, as none of the robots act
* Timestep 6: Robots 1 __and__ 2 act, as 6 is a multiple of 2 __and__ 3; resulting in two marbles being moved from box A to box B and an equilibrium being reached.
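A possible refactoring sketch (not part of the original tutorial): generate the period-gated policy functions with a factory instead of writing `robot_arm_1` and `robot_arm_2` by hand. It reuses `robot_arm()` and `get_current_timestep()` defined above.
```
def make_periodic_robot_arm(period):
    def policy(params, step, sH, s):
        if get_current_timestep(step, s) % period == 0:
            return robot_arm(params, step, sH, s)
        return {'add_to_A': 0, 'add_to_B': 0}   # outside its period, the robot does nothing
    return policy

# The same partial state update block, built from the factory:
partial_state_update_blocks = [{
    'policies': {
        'robot_arm_1': make_periodic_robot_arm(2),   # acts once every 2 timesteps
        'robot_arm_2': make_periodic_robot_arm(3),   # acts once every 3 timesteps
    },
    'variables': {
        'box_A': increment_A,
        'box_B': increment_B
    }
}]
```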
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
## Introduction
Because of the relational structure in a graph,
we can begin to think about "importance" of a node
that is induced because of its relationships
to the rest of the nodes in the graph.
Before we go on, let's think about
a pertinent and contemporary example.
### An example: contact tracing
At the time of writing (April 2020),
finding important nodes in a graph has actually taken on a measure of importance
that we might not have appreciated before.
With the COVID-19 virus spreading,
contact tracing has become quite important.
In an infectious disease contact network,
where individuals are nodes and
contact between individuals of some kind are the edges,
an "important" node in this contact network
would be an individual who was infected
who also was in contact with many people
during the time that they were infected.
### Our dataset: "Sociopatterns"
The dataset that we will use in this chapter is the "[sociopatterns network][sociopatterns]" dataset.
Incidentally, it's also about infectious diseases.
[sociopatterns]: http://konect.uni-koblenz.de/networks/sociopatterns-infectious
Here is the description of the dataset.
> This network describes the face-to-face behavior of people
> during the exhibition INFECTIOUS: STAY AWAY in 2009
> at the Science Gallery in Dublin.
> Nodes represent exhibition visitors;
> edges represent face-to-face contacts that were active for at least 20 seconds.
> Multiple edges between two nodes are possible and denote multiple contacts.
> The network contains the data from the day with the most interactions.
To simplify the network, we have represented only the last contact between individuals.
```
from nams import load_data as cf
G = cf.load_sociopatterns_network()
```
It is loaded as an undirected graph object:
```
type(G)
```
As usual, before proceeding with any analysis,
we should know basic graph statistics.
```
len(G.nodes()), len(G.edges())
```
## A Measure of Importance: "Number of Neighbors"
One measure of importance of a node is
the number of **neighbors** that the node has.
What is a **neighbor**?
We will work with the following definition:
> The neighbor of a node is connected to that node by an edge.
Let's explore this concept, using the NetworkX API.
Every NetworkX graph provides a `G.neighbors(node)` method,
which lets us query the graph for the neighbors
of a given node:
```
G.neighbors(7)
```
It returns a generator rather than the exact list of neighbors,
which means we cannot immediately know how many neighbors there are.
If you tried to do:
```python
len(G.neighbors(7))
```
you would get the following error:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-72c56971d077> in <module>
----> 1 len(G.neighbors(7))
TypeError: object of type 'dict_keyiterator' has no len()
```
Hence, we will need to cast it as a list in order to know
both its length
and its members:
```
list(G.neighbors(7))
```
If some nodes have an extensive list of neighbors,
then using the `dict_keyiterator` is potentially a good memory-saving technique,
as it lazily yields the neighbors.
### Exercise: Rank-ordering the number of neighbors a node has
Since we know how to get the list of nodes that are neighbors of a given node,
try this following exercise:
> Can you create a ranked list of the importance of each individual, based on the number of neighbors they have?
Here are a few hints to help:
- You could consider using a `pandas Series`. This would be a modern and idiomatic way of approaching the problem.
- You could also consider using Python's `sorted` function.
```
from nams.solutions.hubs import rank_ordered_neighbors
#### REPLACE THE NEXT FEW LINES WITH YOUR ANSWER
# answer = rank_ordered_neighbors(G)
# answer
```
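One possible way to approach the exercise (a sketch of my own, not necessarily the book's reference solution) is to build a pandas Series of neighbor counts and sort it:
```
import pandas as pd

# Count the neighbors of every node and rank nodes from most to least connected.
neighbor_counts = pd.Series({n: len(list(G.neighbors(n))) for n in G.nodes()})
neighbor_counts.sort_values(ascending=False)
```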
The original implementation looked like the following
```
from nams.solutions.hubs import rank_ordered_neighbors_original
# rank_ordered_neighbors_original??
```
And another implementation that uses generators:
```
from nams.solutions.hubs import rank_ordered_neighbors_generator
# rank_ordered_neighbors_generator??
```
## Generalizing "neighbors" to arbitrarily-sized graphs
The concept of neighbors is simple and appealing,
but it leaves us with a slight point of dissatisfaction:
it is difficult to compare graphs of different sizes.
Is a node more important solely because it has more neighbors?
What if it were situated in an extremely large graph?
Would we not expect it to have more neighbors?
As such, we need a normalization factor.
One reasonable one, in fact, is
_the number of nodes that a given node could **possibly** be connected to._
By taking the ratio of the number of neighbors a node has
to the number of neighbors it could possibly have,
we get the **degree centrality** metric.
Formally defined, the degree centrality of a node (let's call it $d$)
is the number of neighbors that a node has (let's call it $n$)
divided by the number of neighbors it could _possibly_ have (let's call it $N$):
$$d = \frac{n}{N}$$
NetworkX provides a function for us to calculate degree centrality conveniently:
```
import networkx as nx
import pandas as pd
dcs = pd.Series(nx.degree_centrality(G))
dcs
```
`nx.degree_centrality(G)` returns to us a dictionary of key-value pairs,
where the keys are node IDs
and values are the degree centrality score.
To save on output length, I took the liberty of casting it as a pandas Series
to make it easier to display.
Incidentally, we can also sort the series
to find the nodes with the highest degree centralities:
```
dcs.sort_values(ascending=False)
```
Does the list order look familiar?
It should, since the numerator of the degree centrality metric
is identical to the number of neighbors,
and the denominator is a constant.
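As a quick sanity check (my own sketch, using the standard convention that a node in a simple graph can be connected to at most `len(G) - 1` other nodes), we can recompute the score for one node by hand:
```
# Degree centrality of node 7, computed by hand and compared against NetworkX.
node = 7
n_neighbors = len(list(G.neighbors(node)))
print(n_neighbors / (len(G) - 1), nx.degree_centrality(G)[node])
```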
## Distribution of graph metrics
One important concept that you should come to know
is that the distribution of node-centric values
can characterize classes of graphs.
What do we mean by "distribution of node-centric values"?
One would be the degree distribution,
that is, the collection of node degree values in a graph.
Generally, you might be familiar with plotting a histogram
to visualize distributions of values,
but in this book, we are going to avoid histograms like the plague.
I detail a lot of reasons in a [blog post][ecdf] I wrote in 2018,
but the main points are that:
1. It's easier to lie with histograms.
1. With ECDFs, you get informative statistical information (median, IQR, extremes/outliers)
more easily.
[ecdf]: https://ericmjl.github.io/blog/2018/7/14/ecdfs/
### Exercise: Degree distribution
In this next exercise, we are going to get practice visualizing these values
using empirical cumulative distribution function plots.
I have written for you an ECDF function that you can use already.
Its API looks like the following:
```python
x, y = ecdf(list_of_values)
```
giving you `x` and `y` values that you can directly plot.
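For reference, a minimal ECDF implementation (my own sketch of how such a function might look, not necessarily the actual source of `nams.functions.ecdf`) is:
```python
import numpy as np

def ecdf(data):
    """Return sorted values (x) and their cumulative fractions (y) for an ECDF plot."""
    x = np.sort(data)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y
```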
The exercise prompt is this:
> Plot the ECDF of the degree centrality and degree distributions.
First do it for **degree centrality**:
```
from nams.functions import ecdf
from nams.solutions.hubs import ecdf_degree_centrality
#### REPLACE THE FUNCTION CALL WITH YOUR ANSWER
ecdf_degree_centrality(G)
```
Now do it for **degree**:
```
from nams.solutions.hubs import ecdf_degree
#### REPLACE THE FUNCTION CALL WITH YOUR ANSWER
ecdf_degree(G)
```
The fact that they are identically-shaped
should not surprise you!
### Exercise: What about that denominator?
The denominator $N$ in the degree centrality definition
is "the number of nodes that a node could _possibly_ be connected to".
Can you think of two ways $N$ could be defined?
```
from nams.solutions.hubs import num_possible_neighbors
#### UNCOMMENT TO SEE MY ANSWER
# print(num_possible_neighbors())
```
### Exercise: Circos Plotting
Let's get some practice with the `nxviz` API.
> Visualize the graph `G`, ordering and colouring its nodes by the 'order' node attribute.
```
from nams.solutions.hubs import circos_plot
#### REPLACE THE NEXT LINE WITH YOUR ANSWER
circos_plot(G)
```
### Exercise: Visual insights
Since we know that node colour and order
are by the "order" in which the person entered into the exhibit,
what does this visualization tell you?
```
from nams.solutions.hubs import visual_insights
#### UNCOMMENT THE NEXT LINE TO SEE MY ANSWER
# print(visual_insights())
```
### Exercise: Investigating degree centrality and node order
One of the insights that we might have gleaned from visualizing the graph
is that the nodes that have a high degree centrality
might also be responsible for the edges that criss-cross the Circos plot.
To test this, plot the following:
- x-axis: node degree centrality
- y-axis: maximum difference between the neighbors' `order`s (a node attribute) and the node's `order`.
```
from nams.solutions.hubs import dc_node_order
dc_node_order(G)
```
The somewhat positive correlation between degree centrality and the maximum difference in `order` suggests that this trend holds true.
A further applied question would be to ask what behaviour of these nodes would give rise to this pattern.
Are these nodes actually exhibit staff?
Or is there some other reason why they are staying so long?
This, of course, would require joining in further information
that we would overlay on top of the graph
(by adding them as node or edge attributes)
before we might make further statements.
## Reflections
In this chapter, we defined a metric of node importance: the degree centrality metric.
In the example we looked at, it could help us identify
potential infectious agent superspreaders in a disease contact network.
In other settings, it might help us spot:
- message amplifiers/influencers in a social network, and
- potentially crowded airports that have lots of connections into and out of them (still relevant to infectious disease spread!)
- and many more!
What other settings can you think of in which the number of neighbors that a node has can become
a metric of importance for the node?
## Solutions
Here are the solutions to the exercises above.
```
from nams.solutions import hubs
import inspect
print(inspect.getsource(hubs))
```
|
github_jupyter
|
# Tutorial 08: Creating Custom Environments 创建自定义环境
This tutorial walks you through the process of creating custom environments in Flow. Custom environments contain specific methods that define the problem space of a task, such as the state and action spaces of the RL agent and the signal (or reward) that the RL algorithm will optimize over. By specifying a few methods within a custom environment, individuals can use Flow to design traffic control tasks of various types, such as optimal traffic light signal timing and flow regulation via mixed autonomy traffic (see the figures below). Finally, these environments are compatible with OpenAI Gym.
本教程将带您完成在Flow中创建自定义环境的过程。自定义环境包含定义任务的问题空间的特定方法,例如RL代理的状态和操作空间,以及RL算法将优化的信号(或奖励)。通过在自定义环境中指定一些方法,个人可以使用流来设计各种类型的交通控制任务,例如最优的交通灯信号定时和混合自主交通的流量调节(见下图)。最后,这些环境与OpenAI健身房是兼容的。
The rest of the tutorial is organized as follows: section 1 walks through the process of creating an environment for mixed-autonomy vehicle control in which the autonomous vehicles perceive all vehicles in the network, and section 2 implements the environment in simulation.
本教程的其余部分组织如下:第1节介绍了创建混合自主车辆控制环境的过程,其中自主车辆感知网络中的所有车辆,第2节在仿真中实现了该环境。
<img src="img/sample_envs.png">
## 1. Creating an Environment Class 创建一个环境类
In this tutorial we will create an environment in which the accelerations of a handful of vehicles in the network are specified by a single centralized agent, with the objective of the agent being to improve the average speed of all vehicle in the network. In order to create this environment, we begin by inheriting the base environment class located in *flow.envs*:
在本教程中,我们将创建一个环境,其中网络中少数车辆的加速由一个集中的代理指定,代理的目标是提高网络中所有车辆的平均速度。为了创建这样的环境,我们从继承位于*flow.envs*中的基本环境类开始:
```
# import the base environment class
from flow.envs import Env
# define the environment class, and inherit properties from the base environment class
class myEnv(Env):
pass
```
`Env` provides the interface for running and modifying a SUMO simulation. Using this class, we are able to start sumo, provide a network to specify a configuration and controllers, perform simulation steps, and reset the simulation to an initial configuration.
“Env”提供了运行和修改sumo模拟的接口。使用这个类,我们可以启动sumo,提供指定配置和控制器的网络,执行模拟步骤,并将模拟重置为初始配置。
By inheriting Flow's base environment, a custom environment for varying control tasks can be created by adding the following functions to the child class:
通过继承Flow的基环境,可以通过在子类中添加以下函数来创建用于变化控制任务的自定义环境:
* **action_space**动作空间
* **observation_space**观测空间
* **apply_rl_actions**RL应用空间
* **get_state**获取状态
* **compute_reward**计算奖励值
Each of these components are covered in the next few subsections.
### 1.1 ADDITIONAL_ENV_PARAMS
The features used to parametrize components of the state/action space as well as the reward function are specified within the `EnvParams` input, as discussed in tutorial 1. Specifically, for the sake of our environment, the `additional_params` attribute within `EnvParams` will be responsible for storing information on the maximum possible accelerations and decelerations by the autonomous vehicles in the network. Accordingly, for this problem, we define an `ADDITIONAL_ENV_PARAMS` variable of the form:
用于参数化状态/动作空间组件的特性以及奖励功能在“EnvParams”输入中指定,如教程1中所述。具体来说,为了保护我们的环境,‘EnvParams’中的‘additional_params’属性将负责存储网络中自动驾驶车辆最大可能的加速和减速信息。因此,对于这个问题,我们定义了表单的‘ADDITIONAL_ENV_PARAMS’变量:
```
ADDITIONAL_ENV_PARAMS = {
"max_accel": 1,
"max_decel": 1,
}
```
All environments presented in Flow provide a unique `ADDITIONAL_ENV_PARAMS` component containing the information needed to properly define some environment-specific parameters. We assume that these values are always provided by the user, and accordingly can be called from `env_params`. For example, if we would like to call the "max_accel" parameter, we simply type:
Flow中提供的所有环境都提供了一个惟一的‘ADDITIONAL_ENV_PARAMS’组件,其中包含正确定义某些特定于环境的参数所需的信息。我们假设这些值总是由用户提供的,因此可以从' env_params '中调用。例如,如果我们想调用“max_accel”参数,我们只需输入:
`max_accel = env_params.additional_params["max_accel"]`
### 1.2 action_space 动作空间
The `action_space` method defines the number and bounds of the actions provided by the RL agent. In order to define these bounds with an OpenAI gym setting, we use several objects located within *gym.spaces*. For instance, the `Box` object is used to define a bounded array of values in $\mathbb{R}^n$.
“action_space”方法定义了RL代理提供的操作的数量和界限。为了定义OpenAI健身房设置的这些边界,我们使用了位于*gym.spaces*内的几个对象。例如,“Box”对象用于定义$\mathbb{R}^n$中的有界值数组。
```
from gym.spaces.box import Box
```
In addition, `Tuple` objects (not used by this tutorial) allow users to combine multiple `Box` elements together.
此外,“Tuple”对象(本教程中没有使用)允许用户将多个“Box”元素组合在一起。
```
from gym.spaces import Tuple
```
Once we have imported the above objects, we are ready to define the bounds of our action space. Given that our actions consist of a list of n real numbers (where n is the number of autonomous vehicles) bounded from above and below by "max_accel" and "max_decel" respectively (see section 1.1), we can define our action space as follows:
一旦导入了上述对象,就可以定义操作空间的边界了。假设我们的动作是由n个实数组成的列表(其中n是自动驾驶车辆的数量),从上到下分别由“max_accel”和“max_decel”约束(参见1.1节),我们可以这样定义我们的动作空间:
```
class myEnv(myEnv):
@property
def action_space(self):
num_actions = self.initial_vehicles.num_rl_vehicles
accel_ub = self.env_params.additional_params["max_accel"]
accel_lb = - abs(self.env_params.additional_params["max_decel"])
return Box(low=accel_lb,
high=accel_ub,
shape=(num_actions,))
```
### 1.3 observation_space 观察空间
The observation space of an environment represents the number and types of observations that are provided to the reinforcement learning agent. For this example, we will observe two values for each vehicle: its position and speed. Accordingly, we need an observation space that is twice the size of the number of vehicles in the network.
环境的观察空间表示提供给强化学习代理的观察的数量和类型。对于本例,我们将观察每个车辆的两个值:位置和速度。因此,我们需要的观测空间是网络中车辆数量的两倍。
```
class myEnv(myEnv): # update my environment class
@property
def observation_space(self):
return Box(
low=0,
high=float("inf"),
shape=(2*self.initial_vehicles.num_vehicles,),
)
```
### 1.4 apply_rl_actions 应用Rl动作
The function `apply_rl_actions` is responsible for transforming commands specified by the RL agent into actual actions performed within the simulator. The vehicle kernel within the environment class contains several helper methods that may be used to facilitate this process. These functions include:
函数' apply_rl_actions '负责将RL代理指定的命令转换为在模拟器中执行的实际操作。environment类中的vehicle内核包含几个辅助方法,可以用来促进这个过程。这些功能包括:
* **apply_acceleration** (list of str, list of float) -> None: converts an action, or a list of actions, into accelerations to the specified vehicles (in simulation)
* **apply_lane_change** (list of str, list of {-1, 0, 1}) -> None: converts an action, or a list of actions, into lane change directions for the specified vehicles (in simulation)
* **choose_route** (list of str, list of list of str) -> None: converts an action, or a list of actions, into rerouting commands for the specified vehicles (in simulation)
For our example we consider a situation where the RL agent can only specify accelerations for the RL vehicles; accordingly, the actuation method for the RL agent is defined as follows:
在我们的例子中,我们考虑这样一种情况:RL代理只能为RL车辆指定加速;因此,RL agent的驱动方法定义如下:
```
class myEnv(myEnv): # update my environment class
def _apply_rl_actions(self, rl_actions):
# the names of all autonomous (RL) vehicles in the network
rl_ids = self.k.vehicle.get_rl_ids()
# use the base environment method to convert actions into accelerations for the rl vehicles
self.k.vehicle.apply_acceleration(rl_ids, rl_actions)
```
### 1.5 get_state 获取状态
The `get_state` method extracts features from within the environment and provides them as inputs to the policy provided by the RL agent. Several helper methods exist within Flow to facilitate this process. Some useful helper methods can be accessed from the following objects:
“get_state”方法从环境中提取特性,然后作为RL代理提供的策略的输入。flow中存在几个帮助方法来帮助简化这个过程。一些有用的帮助方法可以从以下对象访问:
* **self.k.vehicle**: provides current state information for all vehicles within the network为网络中的所有车辆提供当前状态信息
* **self.k.traffic_light**: provides state information on the traffic lights提供交通信号灯的状态信息
* **self.k.network**: information on the network, which unlike the vehicles and traffic lights is static网络上的信息,这与车辆和红绿灯是静态的
* More accessor objects and methods can be found within the Flow documentation at: http://berkeleyflow.readthedocs.io/en/latest/
In order to model global observability within the network, our state space consists of the speeds and positions of all vehicles (as mentioned in section 1.3). This is implemented as follows:
为了在网络中建立全局可观测性模型,我们的状态空间由所有车辆的速度和位置组成(如第1.3节所述)。实施办法如下:
```
import numpy as np
class myEnv(myEnv): # update my environment class
def get_state(self, **kwargs):
# the get_ids() method is used to get the names of all vehicles in the network
ids = self.k.vehicle.get_ids()
# we use the get_absolute_position method to get the positions of all vehicles
pos = [self.k.vehicle.get_x_by_id(veh_id) for veh_id in ids]
# we use the get_speed method to get the velocities of all vehicles
vel = [self.k.vehicle.get_speed(veh_id) for veh_id in ids]
# the speeds and positions are concatenated to produce the state
return np.concatenate((pos, vel))
```
### 1.6 compute_reward 计算奖励值
The `compute_reward` method returns the reward associated with any given state. These values may encompass returns from values within the state space (defined in section 1.5) or may contain information provided by the environment but not immediately available within the state, as is the case in partially observable tasks (or POMDPs).
' compute_reward '方法返回与任何给定状态相关联的奖励。这些值可能包含状态空间(在第1.5节中定义)中的值的返回,或者可能包含环境提供的信息,但是不能立即在状态中使用,就像部分可观察任务(或POMDPs)中的情况一样。
For this tutorial, we choose the reward function to be the average speed of all vehicles currently in the network. In order to extract this information from the environment, we use the `get_speed` method within the Vehicle kernel class to collect the current speed of all vehicles in the network, and return the average of these speeds as the reward. This is done as follows:
在本教程中,我们选择奖励函数作为当前网络中所有车辆的平均速度。为了从环境中提取这些信息,我们在车辆内核类中使用' get_speed '方法来收集网络中所有车辆的当前速度,并返回这些速度的平均值作为奖励。具体做法如下:
```
import numpy as np
class myEnv(myEnv): # update my environment class
def compute_reward(self, rl_actions, **kwargs):
# the get_ids() method is used to get the names of all vehicles in the network
ids = self.k.vehicle.get_ids()
# we next get a list of the speeds of all vehicles in the network
speeds = self.k.vehicle.get_speed(ids)
# finally, we return the average of all these speeds as the reward
return np.mean(speeds)
```
## 2. Testing the New Environment 测试新环境
### 2.1 Testing in Simulation
Now that we have successfully created our new environment, we are ready to test this environment in simulation. We begin by running this environment in a non-RL based simulation. The return provided at the end of the simulation is indicative of the cumulative expected reward when jam-like behavior exists within the network.
现在我们已经成功地创建了新的环境,我们准备在模拟中测试这个环境。我们首先在一个非基于rl的模拟中运行这个环境。在模拟结束时提供的回报指示了在netowrk中存在类似于jam的行为时累积的预期回报。
```
from flow.controllers import IDMController, ContinuousRouter
from flow.core.experiment import Experiment
from flow.core.params import SumoParams, EnvParams, \
InitialConfig, NetParams
from flow.core.params import VehicleParams
from flow.networks.ring import RingNetwork, ADDITIONAL_NET_PARAMS
sim_params = SumoParams(sim_step=0.1, render=True)
vehicles = VehicleParams()
vehicles.add(veh_id="idm",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
initial_config = InitialConfig(bunching=20)
flow_params = dict(
exp_tag='ring',
env_name=myEnv, # using my new environment for the simulation
network=RingNetwork,
simulator='traci',
sim=sim_params,
env=env_params,
net=net_params,
veh=vehicles,
initial=initial_config,
)
# number of time steps
flow_params['env'].horizon = 1500
exp = Experiment(flow_params)
# run the sumo simulation
_ = exp.run(1)
```
### 2.2 Training the New Environment 培训新环境
Next, we wish to train this environment in the presence of the autonomous vehicle agent to reduce the formation of waves in the network, thereby pushing the performance of vehicles in the network past the above expected return.
接下来,我们希望在自主车辆代理存在的情况下训练这种环境,以减少网络中波浪的形成,从而使网络中车辆的性能超过上述预期收益。
The below code block may be used to train the above environment using the Proximal Policy Optimization (PPO) algorithm provided by RLlib. In order to register the environment with OpenAI gym, the environment must first be placed in a separate ".py" file and then imported via the script below. Then, the script immediately below should function regularly.
下面的代码块可以使用RLlib提供的Proximal Policy Optimization (PPO)算法来训练上述环境。为了注册OpenAI健身房的环境,环境必须首先放在一个单独的。py”。然后通过下面的脚本导入。然后,下面的脚本应该正常工作。
```
#############################################################
####### Replace this with the environment you created #######
#############################################################
from flow.envs import AccelEnv as myEnv
```
**Note**: We do not recommend training this environment to completion within a jupyter notebook setting; however, once training is complete, visualization of the resulting policy should show that the autonomous vehicle learns to dissipate the formation and propagation of waves in the network.
**注**:我们不建议在这种环境下进行的培训是在木星笔记本设置中完成的;然而,一旦训练完成,结果策略的可视化应该表明,自主车辆学会了在网络中消散波的形成和传播。
```
import json
import ray
from ray.rllib.agents.registry import get_agent_class
from ray.tune import run_experiments
from ray.tune.registry import register_env
from flow.networks.ring import RingNetwork, ADDITIONAL_NET_PARAMS
from flow.utils.registry import make_create_env
from flow.utils.rllib import FlowParamsEncoder
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
from flow.core.params import VehicleParams, SumoCarFollowingParams
from flow.controllers import RLController, IDMController, ContinuousRouter
# time horizon of a single rollout
HORIZON = 1500
# number of rollouts per training iteration
N_ROLLOUTS = 20
# number of parallel workers
N_CPUS = 2
# We place one autonomous vehicle and 22 human-driven vehicles in the network
vehicles = VehicleParams()
vehicles.add(
veh_id="human",
acceleration_controller=(IDMController, {
"noise": 0.2
}),
car_following_params=SumoCarFollowingParams(
min_gap=0
),
routing_controller=(ContinuousRouter, {}),
num_vehicles=21)
vehicles.add(
veh_id="rl",
acceleration_controller=(RLController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=1)
flow_params = dict(
# name of the experiment
exp_tag="stabilizing_the_ring",
# name of the flow environment the experiment is running on
env_name=myEnv, # <------ here we replace the environment with our new environment
# name of the network class the experiment is running on
network=RingNetwork,
# simulator that is used by the experiment
simulator='traci',
# sumo-related parameters (see flow.core.params.SumoParams)
sim=SumoParams(
sim_step=0.1,
render=True,
),
# environment related parameters (see flow.core.params.EnvParams)
env=EnvParams(
horizon=HORIZON,
warmup_steps=750,
clip_actions=False,
additional_params={
"target_velocity": 20,
"sort_vehicles": False,
"max_accel": 1,
"max_decel": 1,
},
),
# network-related parameters (see flow.core.params.NetParams and the
# network's documentation or ADDITIONAL_NET_PARAMS component)
net=NetParams(
additional_params=ADDITIONAL_NET_PARAMS.copy()
),
# vehicles to be placed in the network at the start of a rollout (see
# flow.core.params.VehicleParams)
veh=vehicles,
# parameters specifying the positioning of vehicles upon initialization/
# reset (see flow.core.params.InitialConfig)
initial=InitialConfig(
bunching=20,
),
)
def setup_exps():
"""Return the relevant components of an RLlib experiment.
Returns
-------
str
name of the training algorithm
str
name of the gym environment to be trained
dict
training configuration parameters
"""
alg_run = "PPO"
agent_cls = get_agent_class(alg_run)
config = agent_cls._default_config.copy()
config["num_workers"] = N_CPUS
config["train_batch_size"] = HORIZON * N_ROLLOUTS
config["gamma"] = 0.999 # discount rate
config["model"].update({"fcnet_hiddens": [3, 3]})
config["use_gae"] = True
config["lambda"] = 0.97
config["kl_target"] = 0.02
config["num_sgd_iter"] = 10
config['clip_actions'] = False # FIXME(ev) temporary ray bug
config["horizon"] = HORIZON
# save the flow params for replay
flow_json = json.dumps(
flow_params, cls=FlowParamsEncoder, sort_keys=True, indent=4)
config['env_config']['flow_params'] = flow_json
config['env_config']['run'] = alg_run
create_env, gym_name = make_create_env(params=flow_params, version=0)
# Register as rllib env
register_env(gym_name, create_env)
return alg_run, gym_name, config
alg_run, gym_name, config = setup_exps()
ray.init(num_cpus=N_CPUS + 1)
trials = run_experiments({
flow_params["exp_tag"]: {
"run": alg_run,
"env": gym_name,
"config": {
**config
},
"checkpoint_freq": 20,
"checkpoint_at_end": True,
"max_failures": 999,
"stop": {
"training_iteration": 200,
},
}
})
```
|
github_jupyter
|
# Introduction
This tutorial describes how to create edge and screw dislocations in BCC iron, starting with one unit cell containing two atoms.
## Background
The elastic solution for displacement field of dislocations is provided in the paper [Dislocation Displacement Fields in Anisotropic Media](https://doi.org/10.1063/1.1657954).
## Theoretical
The [paper](https://doi.org/10.1063/1.1657954) mentioned in the background subsection deals with only one dislocation. Here we describe how to extend the solution to a periodic array of dislocations. Since we are dealing with linear elasticity, we can superpose (sum up) the displacement fields of all the individual dislocations. Looking at Eqs. (2-8) of the abovementioned reference, this boils down to finding a closed-form solution for
$$\sum_{m=-\infty}^{\infty} \log\left(z-ma \right).$$
where $z= x+yi$ and $a$ is a real number, equivalent to $\mathbf{H}_{00}$, that defines the periodicity of the dislocations in the $x$ direction.
Let us simplify the problem a bit further. Since the displacement field is only defined up to a constant, we can add or subtract constant terms, so from each $\log\left(z-ma \right)$ we subtract a term $\log\left(a \right)$, leading to
$$\sum_{m=-\infty}^{\infty} \log\left(\frac{z}{a}-m \right).$$
Let us replace $z/a$ with $z$; once we arrive at the solution we will change it back:
$$\sum_{m=-\infty}^{\infty} \log\left(z-m \right).$$
The objective is to find a closed-form solution for
$$f\left(z\right)=\sum_{m=-\infty}^{\infty} \log\left(z-m \right).$$
First note that
$$
f'\left(z\right)=\frac{1}{z}+\sum_{m=1}^{\infty}\frac{1}{z-m}+\frac{1}{z+m},
$$
and also
$$
\frac{1}{z\mp m}=\mp \frac{1}{m}\sum_{n=0}^{\infty}
\left(\pm \frac{z}{m}\right)^n.
$$
This leads to
$$
\frac{1}{z-m}+\frac{1}{z+m}=-\frac{2}{z}\sum_{n=1}^{\infty}\left(\frac{z}{m}\right)^{2n},
$$
and subsequently
$$
f'\left(z\right)=\frac{1}{z}-\frac{2}{z}\sum_{n=1}^{\infty}\left(z\right)^{2n}\sum_{m=1}^{\infty}m^{-2n},
$$
$$
=\frac{1}{z}-\frac{2}{z}\sum_{n=1}^{\infty}\left(z\right)^{2n}\zeta\left(2n\right).
$$
where $\zeta$ is the Riemann zeta function. Since $\zeta\left(0\right)=-1/2$, this simplifies to:
$$
f'\left(z\right)=-\frac{2}{z}\sum_{n=0}^{\infty}\left(z\right)^{2n}\zeta\left(2n\right)
$$
Note that
$$
-\frac{\pi z\cot\left(\pi z\right)}{2}=\sum_{n=0}^{\infty}z^{2n} \zeta\left(2n\right)
$$
I have no idea how I figured this out but it is true. Therefore,
$$
f'\left(z\right)=\pi\cot\left(\pi z\right).
$$
At this point one can naively assume that the problem is solved (like I did) and the answer is something like:
$$
f\left(z\right)=\log\left[\sin\left(\pi z\right)\right]+C,
$$
where $C$ is a constant. However, after checking this against numerical values you will see that this is completely wrong.
The issue here is that the strategy was wrong at the very beginning. The sum of the displacements of infinitely many dislocations will not converge, since we have infinitely many discontinuities in the displacement field. In other words, they do not cancel each other; they feed each other.
But there is still a way to salvage this. Luckily, displacement is a relative quantity and we are dealing with crystals: we can add a discontinuity in the form of an integer number of Burgers vectors to a displacement field and nothing will be affected.
So here is the trick: we will focus only on the displacement field of a single dislocation (number 0). At each iteration we add two dislocations, one to its left and one to its right.
At the $n$th iteration we add a discontinuity of the form
$$
-\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] \pi i
$$
and a constant of the form:
$$
-2\log n.
$$
In other words, we need to evaluate:
$$
\lim_{m\to\infty}\Biggl[\log\left(z\right)+\sum_{n=1}^{m}
\biggl\{
\log\left(z-n\right)+\log\left(z+n\right)
-\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] \pi i
-2\log\left(n \right)
\biggr\}\Biggr] + \log\pi,
$$
which simplifies to
$$
\lim_{m\to\infty}\sum_{n=-m}^{m}\log\left(z-n\right)
-\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] m \pi i
-2\log\left(\frac{m!}{\sqrt{\pi}} \right)
$$
Note that we added an extra $\log\pi$ to the displacement field for aesthetic reasons. After a lot of manipulations and tricks (meaning I don't remember how I got here) we arrive at the following relation:
$$
\lim_{m\to\infty}\sum_{n=-m}^{m}\log\left(z-n\right)
-\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] m \pi i
-2\log\left(\frac{m!}{\sqrt{\pi}} \right)=\log\left[\sin\left(\pi z\right)\right]
$$
However, this is only valid when
$$-1/2 \le\mathrm{Re}\left(z\right)\lt 1/2.$$
If one exceeds this domain the answer is:
$$
\boxed{
\log\left[\sin\left(\pi z\right)\right]-\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right]\left \lceil{\mathrm{Re}\left(\frac{z}{2}\right)}-\frac{3}{4}\right \rceil 2 \pi i
}
$$
where $\lceil \cdot \rceil$ is the ceiling function. Of course there is probably a nicer form; feel free to derive it.
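As a quick numerical sanity check (my own addition, not part of the original derivation), we can compare the truncated, corrected sum against the boxed closed form; the sum converges only like $1/m$, so fairly large $m$ is needed:
```
# Numerical check of the closed form, using numpy's principal-branch complex log.
import numpy as np
from math import lgamma, pi

def truncated_sum(z, m):
    n = np.arange(-m, m + 1)
    s = np.sum(np.log(z - n))                      # sum of principal-branch logs
    s -= np.sign(z.imag) * m * pi * 1j             # discontinuity corrections
    s -= 2.0 * (lgamma(m + 1) - 0.5 * np.log(pi))  # constant corrections; log(m!) = lgamma(m+1)
    return s

def closed_form(z):
    jump = np.ceil(np.real(z) / 2.0 - 0.75)
    return np.log(np.sin(pi * z)) - np.sign(z.imag) * jump * 2.0 * pi * 1j

for z in (0.3 + 0.2j, 2.3 + 0.2j):                 # inside and outside -1/2 <= Re(z) < 1/2
    for m in (100, 10_000, 1_000_000):
        print(z, m, truncated_sum(z, m))
    print(z, "closed form:", closed_form(z))
```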
## Final formulation
To account for the periodicity of dislocations in the $x$ direction, the expression $\log\left(z\right)$ in Eqs. (2-7) of the [paper](https://doi.org/10.1063/1.1657954) should be replaced by:
$$\lim_{m\to\infty}\sum_{n=-m}^{m}\log\left(z-na\right)
-\mathrm{Sign}\left[\mathrm{Im}\left(z\right)\right] m \pi i
-2\log\left(\frac{m!}{\sqrt{\pi}} \right),$$
which has the closed form:
$$
\boxed{
\log\left[\sin\left(\pi\frac{z}{a}\right)\right]-\mathrm{Sign}\left[\mathrm{Im}\left(\frac{z}{a}\right)\right]\left \lceil{\mathrm{Re}\left(\frac{z}{2a}\right)}-\frac{3}{4}\right \rceil 2 \pi i.
}
$$
# Preparation
## Import packages
```
import numpy as np
import matplotlib.pyplot as plt
import mapp4py
from mapp4py import md
from lib.elasticity import rot, cubic, resize, displace, HirthEdge, HirthScrew
```
## Block the output of all cores except for one
```
import os
import sys
from mapp4py import mpi
# silence every MPI rank except rank 0 by redirecting its stdout to /dev/null
if mpi().rank != 0:
    sys.stdout = open(os.devnull, 'w')
```
## Define an `md.export_cfg` object
`md.export_cfg` has a call method that we can use to create quick snapshots of our simulation box
```
xprt = md.export_cfg("");
```
# Screw dislocation
```
sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
nlyrs_fxd=2
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
```
## Create a $\langle110\rangle\times\langle112\rangle\times\frac{1}{2}\langle111\rangle$ cell
### Create a $\langle110\rangle\times\langle112\rangle\times\langle111\rangle$ cell
Since `mapp4py.md.atoms.cell_change()` only accepts integer values, start by creating a $\langle110\rangle\times\langle112\rangle\times\langle111\rangle$ cell.
```
sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]])
```
### Remove half of the atoms and readjust the position of remaining
Now one needs to cut the cell in half in the $[111]$ direction. We can achieve this in three steps:
1. Remove the atoms that are located above $\frac{1}{2}[111]$
2. Double the positions of the remaining atoms in that direction
3. Shrink the box affinely to half in that direction
```
H=np.array(sim.H);
def _(x):
if x[2] > 0.5*H[2, 2] - 1.0e-8:
return False;
else:
x[2]*=2.0;
sim.do(_);
_ = np.full((3,3), 0.0)
_[2, 2] = - 0.5
sim.strain(_)
```
### Readjust the positions
```
displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0]))
```
## Replicating the unit cell
```
max_natms=100000
H=np.array(sim.H);
n_per_area=sim.natms/(H[0,0] * H[1,1]);
_ =np.sqrt(max_natms/n_per_area);
N0 = np.array([
np.around(_ / sim.H[0][0]),
np.around(_ / sim.H[1][1]),
1], dtype=np.int32)
sim *= N0;
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[1][1] += 50.0
resize(sim, H_new, np.full((3),0.5) @ H)
C_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
Q=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)])
hirth = HirthScrew(rot(C_Fe,Q), rot(b*0.5*a,Q))
ctr = np.full((3),0.5) @ H_new;
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
def _(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1, 1];
x0=(x-ctr)/H[0, 0];
if sy>s_fxd or sy<=-s_fxd:
x_dof[1]=x_dof[2]=False;
x+=b_norm*hirth.ave_disp(x0)
else:
x+=b_norm*hirth.disp(x0)
sim.do(_)
H = np.array(sim.H);
H_inv = np.array(sim.B);
H_new = np.array(sim.H);
H_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2)
H_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0]
H_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2)
F = np.transpose(H_inv @ H_new);
sim.strain(F - np.identity(3))
xprt(sim, "dumps/screw.cfg")
```
## Putting it all together
```
def make_scrw(nlyrs_fxd,nlyrs_vel,vel):
#this is for 0K
#c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608);
#this is for 300K
c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
#N0=np.array([80,46,5],dtype=np.int32)
sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
Q=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)])
c0=rot(c_Fe,Q)
hirth = HirthScrew(rot(c_Fe,Q),np.dot(Q,b)*0.5*a)
sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]])
displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0]))
max_natms=1000000
n_per_vol=sim.natms/sim.vol;
_=np.power(max_natms/n_per_vol,1.0/3.0);
N1=np.full((3),0,dtype=np.int32);
for i in range(0,3):
N1[i]=int(np.around(_/sim.H[i][i]));
N0=np.array([N1[0],N1[1],1],dtype=np.int32);
sim*=N0;
sim.kB=8.617330350e-5
sim.create_temp(300.0,8569643);
H=np.array(sim.H);
H_new=np.array(sim.H);
H_new[1][1]+=50.0
resize(sim, H_new, np.full((3),0.5) @ H)
ctr=np.dot(np.full((3),0.5),H_new);
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1])
def _(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1][1];
x0=(x-ctr)/H[0][0];
if sy>s_fxd or sy<=-s_fxd:
x_d[1]=0.0;
x_dof[1]=x_dof[2]=False;
x+=b_norm*hirth.ave_disp(x0)
else:
x+=b_norm*hirth.disp(x0)
if sy<=-s_vel or sy>s_vel:
x_d[2]=2.0*sy*vel;
sim.do(_)
H = np.array(sim.H);
H_inv = np.array(sim.B);
H_new = np.array(sim.H);
H_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2)
H_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0]
H_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2)
F = np.transpose(H_inv @ H_new);
sim.strain(F - np.identity(3))
return N1[2],sim;
```
# Edge dislocation
```
sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
nlyrs_fxd=2
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
sim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]])
H=np.array(sim.H);
def _(x):
if x[0] > 0.5*H[0, 0] - 1.0e-8:
return False;
else:
x[0]*=2.0;
sim.do(_);
_ = np.full((3,3), 0.0)
_[0,0] = - 0.5
sim.strain(_)
displace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0]))
max_natms=100000
H=np.array(sim.H);
n_per_area=sim.natms/(H[0, 0] * H[1, 1]);
_ =np.sqrt(max_natms/n_per_area);
N0 = np.array([
np.around(_ / sim.H[0, 0]),
np.around(_ / sim.H[1, 1]),
1], dtype=np.int32)
sim *= N0;
# remove one layer along ... direction
H=np.array(sim.H);
frac=H[0,0] /N0[0]
def _(x):
if x[0] < H[0, 0] /N0[0] and x[1] >0.5*H[1, 1]:
return False;
sim.do(_)
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[1][1] += 50.0
resize(sim, H_new, np.full((3),0.5) @ H)
C_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
_ = np.cross(b,s)
Q = np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)])
hirth = HirthEdge(rot(C_Fe,Q), rot(b*0.5*a,Q))
_ = (1.0+0.5*(N0[0]-1.0))/N0[0];
ctr = np.array([_,0.5,0.5]) @ H_new;
frac = H[0][0]/N0[0]
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
def _(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1, 1];
x0=(x-ctr);
if(x0[1]>0.0):
x0/=(H[0, 0]-frac)
else:
x0/= H[0, 0]
if sy>s_fxd or sy<=-s_fxd:
x+=b_norm*hirth.ave_disp(x0);
x_dof[0]=x_dof[1]=False;
else:
x+=b_norm*hirth.disp(x0);
x[0]-=0.25*b_norm;
sim.do(_)
H = np.array(sim.H)
H_new = np.array(sim.H);
H_new[0, 0] -= 0.5*b_norm;
resize(sim, H_new, np.full((3),0.5) @ H)
xprt(sim, "dumps/edge.cfg")
```
## Putting it all together
```
def make_edge(nlyrs_fxd,nlyrs_vel,vel):
#this is for 0K
#c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608);
#this is for 300K
c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
#N0=np.array([80,46,5],dtype=np.int32)
sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
# create rotation matrix
_ = np.cross(b,s)
Q=np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)])
hirth = HirthEdge(rot(c_Fe,Q),np.dot(Q,b)*0.5*a)
# create a unit cell
sim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]])
H=np.array(sim.H);
def f0(x):
if x[0]>0.5*H[0][0]-1.0e-8:
return False;
else:
x[0]*=2.0;
sim.do(f0);
_ = np.full((3,3), 0.0)
_[0,0] = - 0.5
sim.strain(_)
displace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0]))
max_natms=1000000
n_per_vol=sim.natms/sim.vol;
_=np.power(max_natms/n_per_vol,1.0/3.0);
N1=np.full((3),0,dtype=np.int32);
for i in range(0,3):
N1[i]=int(np.around(_/sim.H[i][i]));
N0=np.array([N1[0],N1[1],1],dtype=np.int32);
N0[0]+=1;
sim*=N0;
# remove one layer along ... direction
H=np.array(sim.H);
frac=H[0][0]/N0[0]
def _(x):
if x[0] < H[0][0]/N0[0] and x[1]>0.5*H[1][1]:
return False;
sim.do(_)
sim.kB=8.617330350e-5
sim.create_temp(300.0,8569643);
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[1][1] += 50.0
ctr=np.dot(np.full((3),0.5),H);
resize(sim,H_new, np.full((3),0.5) @ H)
l=(1.0+0.5*(N0[0]-1.0))/N0[0];
ctr=np.dot(np.array([l,0.5,0.5]),H_new);
frac=H[0][0]/N0[0]
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1])
def f(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1][1];
x0=(x-ctr);
if(x0[1]>0.0):
x0/=(H[0][0]-frac)
else:
x0/= H[0][0]
if sy>s_fxd or sy<=-s_fxd:
x_d[1]=0.0;
x_dof[0]=x_dof[1]=False;
x+=b_norm*hirth.ave_disp(x0);
else:
x+=b_norm*hirth.disp(x0);
if sy<=-s_vel or sy>s_vel:
x_d[0]=2.0*sy*vel;
x[0]-=0.25*b_norm;
sim.do(f)
H = np.array(sim.H)
H_new = np.array(sim.H);
H_new[0, 0] -= 0.5*b_norm;
resize(sim, H_new, np.full((3),0.5) @ H)
return N1[2], sim;
nlyrs_fxd=2
nlyrs_vel=7;
vel=-0.004;
N,sim=make_edge(nlyrs_fxd,nlyrs_vel,vel)
xprt(sim, "dumps/edge.cfg")
_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=np.float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
B = np.linalg.inv(
np.array([
[C[0, 0, 0, 0], C[0, 0, 1, 1], C[0, 0, 0, 1]],
[C[0, 0, 1, 1], C[1, 1, 1, 1], C[1, 1, 0, 1]],
[C[0, 0, 0, 1], C[1, 1, 0, 1], C[0, 1, 0, 1]]
]
))
_ = np.roots([B[0, 0], -2.0*B[0, 2],2.0*B[0, 1]+B[2, 2], -2.0*B[1, 2], B[1, 1]])
mu = np.array([_[0],0.0]);
if np.absolute(np.conjugate(mu[0]) - _[1]) > 1.0e-12:
mu[1] = _[1];
else:
mu[1] = _[2]
alpha = np.real(mu);
beta = np.imag(mu);
p = B[0,0] * mu**2 - B[0,2] * mu + B[0, 1]
q = B[0,1] * mu - B[0, 2] + B[1, 1]/ mu
K = np.stack([p, q]) * np.array([mu[1], mu[0]]) / (mu[1] - mu[0])
K_r = np.real(K)
K_i = np.imag(K)
Tr = np.stack([
np.array(np.array([[1.0, alpha[0]], [0.0, beta[0]]])),
np.array([[1.0, alpha[1]], [0.0, beta[1]]])
], axis=1)
def u_f0(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) + x[0])
def u_f1(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) - x[0]) * np.sign(x[1])
def disp(x):
_ = Tr @ x
return K_r @ u_f0(_) + K_i @ u_f1(_)
```
## Putting it all together
```
_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=np.float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
disp = crack(C)
n = 300;
r = 10;
disp_scale = 0.3;
n0 = int(np.round(n/ (1 +np.pi), ))
n1 = n - n0
xs = np.concatenate((
np.stack([np.linspace(0, -r , n0), np.full((n0,), -1.e-8)]),
r * np.stack([np.cos(np.linspace(-np.pi, np.pi , n1)),np.sin(np.linspace(-np.pi, np.pi , n1))]),
np.stack([np.linspace(-r, 0 , n0), np.full((n0,), 1.e-8)]),
), axis =1)
xs_def = xs + disp_scale * disp(xs)
fig, ax = plt.subplots(figsize=(10.5,5), ncols = 2)
ax[0].plot(xs[0], xs[1], "b-", label="non-deformed");
ax[1].plot(xs_def[0], xs_def[1], "r-.", label="deformed");
```
|
github_jupyter
|
# Managing pins
```
%load_ext autoreload
%autoreload 2
import qiskit_metal as metal
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, Headings
Headings.h1('Welcome to Qiskit Metal')
design = designs.DesignPlanar()
gui = MetalGUI(design)
```
First we create some transmon pockets to have a number of pins generated for use.
```
from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket
## Custom options for all the transmons
options = dict(
# Some options we want to modify from the defaults
# (see below for defaults)
pad_width = '425 um',
pocket_height = '650um',
# Adding 4 connectors (see below for defaults)
connection_pads=dict(
a = dict(loc_W=+1,loc_H=+1),
b = dict(loc_W=-1,loc_H=+1, pad_height='30um'),
c = dict(loc_W=+1,loc_H=-1, pad_width='200um'),
d = dict(loc_W=-1,loc_H=-1, pad_height='50um')
)
)
## Create 4 transmons
q1 = TransmonPocket(design, 'Q1', options = dict(
pos_x='+2.4mm', pos_y='+0.0mm', **options))
q2 = TransmonPocket(design, 'Q2', options = dict(
pos_x='+0.0mm', pos_y='-0.9mm', orientation = '90', **options))
q3 = TransmonPocket(design, 'Q3', options = dict(
pos_x='-2.4mm', pos_y='+0.0mm', **options))
q4 = TransmonPocket(design, 'Q4', options = dict(
pos_x='+0.0mm', pos_y='+0.9mm', orientation = '90', **options))
## Rebuild the design
gui.rebuild()
gui.autoscale()
```
Selecting the different components via the GUI shows the pins each component has. You can also see this via:
```
design.components.Q1.pins.keys()
```
Each pin contains a dictionary of information which can be used by other components or renderers.
```
design.components.Q1.pins.a
```
We can pass these pins into some components to auto generate connections, such as CPW lines.
```
from qiskit_metal.qlibrary.tlines.straight_path import RouteStraight
c1 = RouteStraight(design, 'c1', type="Route", options=dict(pin_inputs=dict(start_pin = dict(component = 'Q1',
pin = 'd'),
end_pin=dict(component = 'Q2',
pin = 'c'))))
gui.rebuild()
gui.autoscale()
```
The example CPW also automatically generates its own pins based on the pin inputs it was given. This allows such a component to not
be destroyed if the component it is attached to is deleted.
```
design.components.c1.pins
```
We can also see what active connections there are from the netlist. Pins that share the same net_id indicate they are connected. Pins that are not on the net list are currently open.
```
design.net_info
```
What happens if we try to pass in a component/pin combo that doesn't exist?
```
#A component that doesn't exist
c2 = RouteStraight(design, 'c2', type="Route", options=dict(pin_inputs = dict(start_pin = dict(component = 'NotReallyHere',
pin = 'd'),
end_pin =dict(component = 'Q2',
pin = 'a'))))
#A pin that doesn't exist
c3 = RouteStraight(design, 'c3', type="Route", options=dict(pin_inputs = dict(start_pin = dict(component = 'Q1',
pin = 'NotReallyHere'),
end_pin =dict(component = 'Q2',
pin = 'a'))))
```
Or if we try to pass in a pin that is already connected.
```
c4 = RouteStraight(design, 'c4', type="Route", options=dict(pin_inputs = dict(start_pin = dict(component = 'Q1',
pin = 'b'),
end_pin =dict(component = 'Q2',
pin = 'c'))))
```
pin_inputs is the default dictionary for passing pins into a component, **BUT** how the dictionary is structured is component dependent. Using the above structure (eg. start_pin, end_pin) is suggested for any 2 port type connection, but you should always check the documentation for the specific component you are wanting to use.
```
Headings.h1('CPW Examples')
```
An example set showing some current functional CPW components, including both simple auto-routing and meandering
```
design.delete_all_components()
from qiskit_metal.qlibrary.terminations.open_to_ground import OpenToGround
from qiskit_metal.qlibrary.tlines.framed_path import RouteFramed
from qiskit_metal.qlibrary.tlines.straight_path import RouteStraight
from qiskit_metal.qlibrary.tlines.meandered import RouteMeander
open_start_straight = OpenToGround(design,'Open_straight_start',options=Dict(pos_x='0um',pos_y='0um',orientation = '-90'))
open_end_straight = OpenToGround(design,'Open_straight_end',options=Dict(pos_x='0um',pos_y='1500um',orientation = '90'))
open_start_auto = OpenToGround(design,'Open_auto_start',options=Dict(pos_x='250um',pos_y='0um',orientation = '-90'))
open_end_auto = OpenToGround(design,'Open_auto_end',options=Dict(pos_x='250um',pos_y='1500um',orientation = '0'))
open_start_meander = OpenToGround(design,'Open_meander_start',options=Dict(pos_x='1000um',pos_y='0um',orientation = '-90'))
open_end_meander = OpenToGround(design,'Open_meander_end',options=Dict(pos_x='1000um',pos_y='1500um',orientation = '90'))
testStraight = RouteStraight(design,'straightTest',options=Dict(pin_inputs=Dict(
start_pin=Dict(
component = 'Open_straight_start',
pin = 'open'),
end_pin=Dict(
component = 'Open_straight_end',
pin = 'open')
)))
testAuto = RouteFramed(design,'autoTest',options=Dict(pin_inputs=Dict(
start_pin=Dict(
component = 'Open_auto_start',
pin = 'open'),
end_pin=Dict(
component = 'Open_auto_end',
pin = 'open')
)))
testMeander = RouteMeander(design,'meanderTest',options=Dict(pin_inputs=Dict(
start_pin=Dict(
component = 'Open_meander_start',
pin = 'open'),
end_pin=Dict(
component = 'Open_meander_end',
pin = 'open')
)))
gui.rebuild()
gui.autoscale()
gui.screenshot()
```
|
github_jupyter
|
# Deep Convolutional Neural Networks
In this assignment, we will be using the Keras library to build, train, and evaluate some *relatively simple* Convolutional Neural Networks to demonstrate how adding layers to a network can improve accuracy, yet are more computationally expensive.
The purpose of this assignment is for you to demonstrate understanding of the appropriate structure of a convolutional neural network and to give you an opportunity to research any parameters or elements of CNNs that you don't fully understand.
We will be using the cifar10 dataset for this assignment; however, in order to keep the dataset size small enough to be trained in a reasonable amount of time in a Google Colab, we will only be looking at two classes from the dataset - cats and dogs.

```
# Import important libraries and methods
import matplotlib.pyplot as plt
import numpy as np
import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras import backend as K
if K.backend()=='tensorflow':
K.set_image_dim_ordering("th")
# input image dimensions
img_rows, img_cols = 32, 32
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# Important Hyperparameters
batch_size = 32
num_classes = 2
epochs = 100
# Plot sample image from each cifar10 class.
class_names = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
fig = plt.figure(figsize=(8,3))
for i in range(10):
ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
idx = np.where(y_train[:]==i)[0]
features_idx = x_train[idx,::]
img_num = np.random.randint(features_idx.shape[0])
im = np.transpose(features_idx[img_num,::],(1,2,0))
ax.set_title(class_names[i])
plt.imshow(im)
plt.show()
# Only look at cats [=3] and dogs [=5]
train_picks = np.ravel(np.logical_or(y_train==3,y_train==5))
test_picks = np.ravel(np.logical_or(y_test==3,y_test==5))
y_train = np.array(y_train[train_picks]==5,dtype=int)
y_test = np.array(y_test[test_picks]==5,dtype=int)
x_train = x_train[train_picks]
x_test = x_test[test_picks]
# check for image_data format and format image shape accordingly
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 3, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 3, img_rows, img_cols)
input_shape = (3, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 3)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 3)
input_shape = (img_rows, img_cols, 3)
# Normalize pixel values between 0 and 1
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# Convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(np.ravel(y_train), num_classes)
y_test = keras.utils.to_categorical(np.ravel(y_test), num_classes)
# Check train and test lengths
print('y_train length:', len(y_train))
print('x_train length:', len(x_train))
print('y_test length:', len(y_test))
print('x_test length:', len(x_test))
```
# Model #1
This model will be almost as simple as we can make it. It should look something like:
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Max Pooling - pool_size = (2,2)
* Dropout - use .25 for all layers but the final dropout layer
---
* Flatten
* Fully-Connected (Dense)
* Dropout - use .5 this time
* Fully-Connected (Dense layer where # neurons = # final classes/labels)
Then compile the model using categorical_crossentropy as your loss metric. Use the Adam optimizer, and accuracy as your overall scoring metric.
If you're lost when you get to this point, make sure you look at the lecture colab for somewhat similar sample code.
```
x_train.shape
model1 = Sequential()
model1.add(Conv2D(8, (3,3), activation='relu', input_shape=(3, 32, 32)))
model1.add(Dropout(.25))
model1.add(Conv2D(16, (3,3), activation='relu'))
model1.add(Dropout(.25))
model1.add(MaxPooling2D((2,2)))
model1.add(Flatten())
model1.add(Dense(64, activation='relu'))
model1.add(Dropout(0.5))
model1.add(Dense(2, activation='softmax'))
model1.summary()
model1.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
```
## Fit your model
Fit your model and save it to a new variable so that we can access the .history value to make a plot of our training and validation accuracies by epoch.
```
model1_training = model1.fit(x_train, y_train, epochs=50, batch_size=128, validation_split=0.1)
```
## Plot Training and Validation Accuracies
Use your matplotlib skills to give us a nice line graph of both training and validation accuracies as the number of epochs increases. Don't forget your legend, axis and plot title.
```
def train_val_metrics(epochs, model_training):
epochs = range(1, epochs+1)
metrics = model_training.history
train_loss = metrics['loss']
train_acc = metrics['acc']
val_loss = metrics['val_loss']
val_acc = metrics['val_acc']
ax = plt.subplot(211)
train, = ax.plot(epochs, train_loss)
val, = ax.plot(epochs, val_loss)
ax.legend([train, val], ['training', 'validation'])
ax.set(xlabel='epochs', ylabel='categorical cross-entropy loss')
ax2 = plt.subplot(212)
train2, = ax2.plot(epochs, train_acc)
val2, = ax2.plot(epochs, val_acc)
ax2.legend([train2, val2], ['training', 'validation'])
ax2.set(xlabel='epochs', ylabel='accuracy')
train_val_metrics(50, model1_training)
```
The model begins to overfit around epoch 20 or so. Early stopping would be useful here.

# Model #2
Lets add an additional set of convolutional->activation->pooling to this model:
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Max Pooling - pool_size = (2,2)
* Dropout - use .25 for all layers but the final layer
---
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Max Pooling - pool_size = (2,2)
* Dropout - use .25 for all layers but the final layer
---
* Flatten
* Fully-Connected (Dense)
* Dropout - use .5 this time
* Fully-Connected (Dense layer where # neurons = # final classes/labels)
Again, compile the model using categorical_crossentropy as your loss metric and use the Adam optimizer, and accuracy as your overall scoring metric.
```
model2 = Sequential()
model2.add(Conv2D(8, (3,3), activation='relu', input_shape=(3, 32, 32)))
model2.add(Dropout(.25))
model2.add(Conv2D(16, (3,3), activation='relu'))
model2.add(Dropout(.25))
model2.add(MaxPooling2D((2,2)))
model2.add(Conv2D(16, (3,3), activation='relu', input_shape=(3, 32, 32)))
model2.add(Dropout(.25))
model2.add(Conv2D(32, (3,3), activation='relu'))
model2.add(Dropout(.25))
model2.add(MaxPooling2D((2,2)))
model2.add(Flatten())
model2.add(Dense(64, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(2, activation='softmax'))
model2.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model2.summary()
```
## Fit your model
Fit your model and save it to a new variable so that we can access the .history value to make a plot of our training and validation accuracies by epoch.
```
model2_training = model2.fit(x_train, y_train, epochs=50, batch_size=128, validation_split=0.1)
```
## Plot Training and Validation Accuracies
Use your matplotlib skills to give us a nice line graph of both training and validation accuracies as the number of epochs increases. Don't forget your legend, axis and plot title.
```
train_val_metrics(50, model2_training)
```
The model continues to find loss and accuracy improvements, suggesting that it could be trained for more epochs.

# Model #3
Finally, one more set of convolutional/activation/pooling:
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Max Pooling - pool_size = (2,2)
* Dropout - use .25 for all layers but the final layer
---
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Max Pooling - pool_size = (2,2)
* Dropout - use .25 for all layers but the final layer
---
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Conv2D - kernel_size = (3,3)
* Relu Activation
* Max Pooling - pool_size = (2,2)
* Dropout - use .25 for all layers but the final layer
---
* Flatten
* Fully-Connected (Dense)
* Dropout - use .5 this time
* Fully-Connected (Dense layer where # neurons = # final classes/labels)
Again, compile the model using categorical_crossentropy as your loss metric and use the Adam optimizer, and accuracy as your overall scoring metric.
```
model3 = Sequential()
model3.add(Conv2D(8, (3,3), activation='relu', input_shape=(3, 32, 32)))
model3.add(Dropout(.25))
model3.add(Conv2D(16, (3,3), activation='relu'))
model3.add(Dropout(.25))
model3.add(MaxPooling2D((2,2), strides=1))
model3.add(Conv2D(16, (3,3), activation='relu', input_shape=(3, 32, 32)))
model3.add(Dropout(.25))
model3.add(Conv2D(32, (3,3), activation='relu'))
model3.add(Dropout(.25))
model3.add(MaxPooling2D((2,2), strides=1))
model3.add(Conv2D(32, (3,3), activation='relu', input_shape=(3, 32, 32)))
model3.add(Dropout(.25))
model3.add(Conv2D(64, (3,3), activation='relu'))
model3.add(Dropout(.25))
model3.add(MaxPooling2D(2,2))
model3.add(Flatten())
model3.add(Dense(128, activation='relu'))
model3.add(Dropout(0.5))
model3.add(Dense(2, activation='softmax'))
model3.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model3.summary()
```
## Fit your model
Fit your model and save it to a new variable so that we can access the .history value to make a plot of our training and validation accuracies by epoch.
```
model3_training = model3.fit(x_train, y_train, epochs=50, batch_size=128, validation_split=0.1)
```
## Plot Training and Validation Accuracies
Use your matplotlib skills to give us a nice line graph of both training and validation accuracies as the number of epochs increases. Don't forget your legend, axis and plot title.
```
train_val_metrics(50, model3_training)
```
# Stretch Goal:
## Use other classes from Cifar10
Try using different classes from the Cifar10 dataset or use all 10. You might need to sample the training data or limit the number of epochs if you decide to use the entire dataset due to processing constraints.
## Hyperparameter Tune Your Model
If you have successfully shown how increasing the depth of a neural network can improve its accuracy, and you feel like you have a solid understanding of all of the different parts of CNNs, try hyperparameter tuning your strongest model to see how much additional accuracy you can squeeze out of it. This will also give you a chance to research the different hyperparameters as well as their significance/purpose. (There are lots and lots.)
---
Here's a helpful article that will show you how to get started using GridSearch to hyperparameter tune your CNN (should you desire to use that method):
[Grid Search Hyperparameters for Deep Learning Models in Python With Keras](https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/)
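As a hedged starting point (the builder, parameter names, and ranges below are purely illustrative and not taken from the article), the scikit-learn wrapper lets you grid-search a Keras model builder over a few hyperparameters:
```
# Illustrative sketch only: grid-search batch size and epochs with the scikit-learn wrapper.
# Assumes x_train/y_train from the cells above; the wrapper import path depends on your Keras version
# (tf.keras exposes it as tf.keras.wrappers.scikit_learn).
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_cnn():
    model = Sequential()
    model.add(Conv2D(8, (3,3), activation='relu', input_shape=(3, 32, 32)))
    model.add(Flatten())
    model.add(Dense(2, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

wrapped_model = KerasClassifier(build_fn=build_cnn, verbose=0)
param_grid = {'batch_size': [64, 128], 'epochs': [10, 20]}
grid = GridSearchCV(wrapped_model, param_grid=param_grid, cv=3)
# grid_result = grid.fit(x_train, y_train)
# print(grid_result.best_params_, grid_result.best_score_)
```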
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# The Keras Functional API in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/guide/keras/functional"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/keras/functional.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/keras/functional.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Setup
```
!pip install pydot
!apt-get install graphviz
from __future__ import absolute_import, division, print_function
!pip install tensorflow-gpu==2.0.0-alpha0
import tensorflow as tf
tf.keras.backend.clear_session() # For easy reset of notebook state.
```
## Introduction
You're already familiar with the use of `keras.Sequential()` to create models.
The Functional API is a way to create models that is more flexible than `Sequential`:
it can handle models with non-linear topology, models with shared layers,
and models with multiple inputs or outputs.
It's based on the idea that a deep learning model
is usually a directed acyclic graph (DAG) of layers.
The Functional API is a set of tools for **building graphs of layers**.
Consider the following model:
```
(input: 784-dimensional vectors)
↧
[Dense (64 units, relu activation)]
↧
[Dense (64 units, relu activation)]
↧
[Dense (10 units, softmax activation)]
↧
(output: probability distribution over 10 classes)
```
It's a simple graph of 3 layers.
To build this model with the functional API,
you would start by creating an input node:
```
from tensorflow import keras
inputs = keras.Input(shape=(784,))
```
Here we just specify the shape of our data: 784-dimensional vectors.
Note that the batch size is always omitted; we only specify the shape of each sample.
For an input meant for images of shape `(32, 32, 3)`, we would have used:
```
img_inputs = keras.Input(shape=(32, 32, 3))
```
What gets returned, `inputs`, contains information about the shape and dtype of the
input data that you expect to feed to your model:
```
inputs.shape
inputs.dtype
```
You create a new node in the graph of layers by calling a layer on this `inputs` object:
```
from tensorflow.keras import layers
dense = layers.Dense(64, activation='relu')
x = dense(inputs)
```
The "layer call" action is like drawing an arrow from "inputs" to this layer we created.
We're "passing" the inputs to the `dense` layer, and out we get `x`.
Let's add a few more layers to our graph of layers:
```
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
```
At this point, we can create a `Model` by specifying its inputs and outputs in the graph of layers:
```
model = keras.Model(inputs=inputs, outputs=outputs)
```
To recap, here is our full model definition process:
```
inputs = keras.Input(shape=(784,), name='img')
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
```
Let's check out what the model summary looks like:
```
model.summary()
```
We can also plot the model as a graph:
```
keras.utils.plot_model(model, 'my_first_model.png')
```
And optionally display the input and output shapes of each layer in the plotted graph:
```
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)
```
This figure and the code we wrote are virtually identical. In the code version,
the connection arrows are simply replaced by the call operation.
A "graph of layers" is a very intuitive mental image for a deep learning model,
and the functional API is a way to create models that closely mirrors this mental image.
## Training, evaluation, and inference
Training, evaluation, and inference work exactly in the same way for models built
using the Functional API as for Sequential models.
Here is a quick demonstration.
Here we load MNIST image data, reshape it into vectors,
fit the model on the data (while monitoring performance on a validation split),
and finally we evaluate our model on the test data:
```
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
```
For a complete guide about model training and evaluation, see [Guide to Training & Evaluation](./training_and_evaluation.ipynb).
## Saving and serialization
Saving and serialization work exactly in the same way for models built
using the Functional API as for Sequential models.
The standard way to save a Functional model is to call `model.save()` to save the whole model into a single file.
You can later recreate the same model from this file, even if you no longer have access to the code
that created the model.
This file includes:
- The model's architecture
- The model's weight values (which were learned during training)
- The model's training config (what you passed to `compile`), if any
- The optimizer and its state, if any (this enables you to restart training where you left off)
```
model.save('path_to_my_model.h5')
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model('path_to_my_model.h5')
```
For a complete guide about model saving, see [Guide to Saving and Serializing Models](./saving_and_serializing.ipynb).
## Using the same graph of layers to define multiple models
In the functional API, models are created by specifying their inputs
and outputs in a graph of layers. That means that a single graph of layers
can be used to generate multiple models.
In the example below, we use the same stack of layers to instantiate two models:
an `encoder` model that turns image inputs into 16-dimensional vectors,
and an end-to-end `autoencoder` model for training.
```
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
```
Note that we make the decoding architecture strictly symmetrical to the encoding architecture,
so that we get an output shape that is the same as the input shape `(28, 28, 1)`.
The reverse of a `Conv2D` layer is a `Conv2DTranspose` layer, and the reverse of a `MaxPooling2D`
layer is an `UpSampling2D` layer.
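A quick shape check (a standalone sketch, separate from the autoencoder above) makes this inversion concrete:
```
# Sketch: a Conv2DTranspose with the same kernel size undoes the spatial shrinkage of a Conv2D.
demo_in = keras.Input(shape=(28, 28, 1))
down = layers.Conv2D(8, 3)(demo_in)           # -> (26, 26, 8)
up = layers.Conv2DTranspose(1, 3)(down)       # -> back to (28, 28, 1)
print(keras.Model(demo_in, up).output_shape)  # (None, 28, 28, 1)
```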
## All models are callable, just like layers
You can treat any model as if it were a layer, by calling it on an `Input` or on the output of another layer.
Note that by calling a model you aren't just reusing the architecture of the model, you're also reusing its weights.
Let's see this in action. Here's a different take on the autoencoder example that creates an encoder model, a decoder model,
and chains them in two calls to obtain the autoencoder model:
```
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
```
As you can see, models can be nested: a model can contain submodels (since a model is just like a layer).
A common use case for model nesting is *ensembling*.
As an example, here's how to ensemble a set of models into a single model that averages their predictions:
```
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1, activation='sigmoid')(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
```
## Manipulating complex graph topologies
### Models with multiple inputs and outputs
The functional API makes it easy to manipulate multiple inputs and outputs.
This cannot be handled with the Sequential API.
Here's a simple example.
Let's say you're building a system for ranking custom issue tickets by priority and routing them to the right department.
Your model will have 3 inputs:
- Title of the ticket (text input)
- Text body of the ticket (text input)
- Any tags added by the user (categorical input)
It will have two outputs:
- Priority score between 0 and 1 (scalar sigmoid output)
- The department that should handle the ticket (softmax output over the set of departments)
Let's build this model in a few lines with the Functional API.
```
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(shape=(None,), name='title') # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body') # Variable-length sequence of ints
tags_input = keras.Input(shape=(num_tags,), name='tags') # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, activation='sigmoid', name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, activation='softmax', name='department')(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred])
```
Let's plot the model:
```
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
```
When compiling this model, we can assign different losses to each output.
You can even assign different weights to each loss, to modulate their
contribution to the total training loss.
```
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=['binary_crossentropy', 'categorical_crossentropy'],
loss_weights=[1., 0.2])
```
Since we gave names to our output layers, we could also specify the loss like this:
```
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority': 'binary_crossentropy',
'department': 'categorical_crossentropy'},
loss_weights=[1., 0.2])
```
We can train the model by passing lists of Numpy arrays of inputs and targets:
```
import numpy as np
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
{'priority': priority_targets, 'department': dept_targets},
epochs=2,
batch_size=32)
```
When calling fit with a `Dataset` object, it should yield either a
tuple of lists like `([title_data, body_data, tags_data], [priority_targets, dept_targets])`
or a tuple of dictionaries like
`({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets})`.
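For instance, here is a minimal sketch that wraps the same dummy dicts from above in a `tf.data.Dataset` and fits on it:
```
# Sketch: fit on a tf.data.Dataset that yields (input dict, target dict) tuples.
train_ds = tf.data.Dataset.from_tensor_slices((
    {'title': title_data, 'body': body_data, 'tags': tags_data},
    {'priority': priority_targets, 'department': dept_targets}))
model.fit(train_ds.batch(32), epochs=1)
```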
For more detailed explanation, refer to the complete guide [Guide to Training & Evaluation](./training_and_evaluation.ipynb).
### A toy resnet model
In addition to models with multiple inputs and outputs,
the Functional API makes it easy to manipulate non-linear connectivity topologies,
that is to say, models where layers are not connected sequentially.
This also cannot be handled with the Sequential API (as the name indicates).
A common use case for this is residual connections.
Let's build a toy ResNet model for CIFAR10 to demonstrate this.
```
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()
```
Let's plot the model:
```
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)
```
Let's train it:
```
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss='categorical_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train,
batch_size=64,
epochs=1,
validation_split=0.2)
```
## Sharing layers
Another good use for the functional API is models that use shared layers. Shared layers are layer instances that get reused multiple times in the same model: they learn features that correspond to multiple paths in the graph-of-layers.
Shared layers are often used to encode inputs that come from similar spaces (say, two different pieces of text that feature similar vocabulary), since they enable sharing of information across these different inputs, and they make it possible to train such a model on less data. If a given word is seen in one of the inputs, that will benefit the processing of all inputs that go through the shared layer.
To share a layer in the Functional API, just call the same layer instance multiple times. For instance, here's an `Embedding` layer shared across two different text inputs:
```
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')
# We reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
```
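The same trick works for any layer instance. Here is a small sketch (the layer sizes are arbitrary) that also reuses one `LSTM` on both encoded inputs and wires everything into a two-input model:
```
# Sketch: reuse a single LSTM instance on both encoded inputs, then merge them into one model.
shared_lstm = layers.LSTM(32)
merged = layers.concatenate([shared_lstm(encoded_input_a), shared_lstm(encoded_input_b)])
score = layers.Dense(1, activation='sigmoid')(merged)
shared_model = keras.Model([text_input_a, text_input_b], score)
shared_model.summary()
```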
## Extracting and reusing nodes in the graph of layers
Because the graph of layers you are manipulating in the Functional API is a static data structure, it can be accessed and inspected. This is how we are able to plot Functional models as images, for instance.
This also means that we can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere. This is extremely useful for feature extraction, for example!
Let's look at an example. This is a VGG19 model with weights pre-trained on ImageNet:
```
from tensorflow.keras.applications import VGG19
vgg19 = VGG19()
```
And these are the intermediate activations of the model, obtained by querying the graph data structure:
```
features_list = [layer.output for layer in vgg19.layers]
```
We can use these features to create a new feature-extraction model that returns the values of the intermediate layer activations -- and we can do all of this in 3 lines.
```
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
```
This comes in handy when [implementing neural style transfer](https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398), among other things.
## Extending the API by writing custom layers
tf.keras has a wide range of built-in layers. Here are a few examples:
- Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`, `Conv2DTranspose`, etc.
- Pooling layers: `MaxPooling1D`, `MaxPooling2D`, `MaxPooling3D`, `AveragePooling1D`, etc.
- RNN layers: `GRU`, `LSTM`, `ConvLSTM2D`, etc.
- `BatchNormalization`, `Dropout`, `Embedding`, etc.
If you don't find what you need, it's easy to extend the API by creating your own layers.
All layers subclass the `Layer` class and implement:
- A `call` method, that specifies the computation done by the layer.
- A `build` method, that creates the weights of the layer (note that this is just a style convention; you could create weights in `__init__` as well).
To learn more about creating layers from scratch, check out the guide [Guide to writing layers and models from scratch](./custom_layers_and_models.ipynb).
Here's a simple implementation of a `Dense` layer:
```
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
```
If you want your custom layer to support serialization, you should also define a `get_config` method,
that returns the constructor arguments of the layer instance:
```
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
```
Optionally, you could also implement the classmethod `from_config(cls, config)`, which is in charge of recreating a layer instance given its config dictionary. The default implementation of `from_config` is:
```python
def from_config(cls, config):
return cls(**config)
```
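For instance, with the `CustomDense` defined above, the default implementation is already enough to round-trip the layer through its config (a minimal sketch):
```python
layer = CustomDense(units=16)
config = layer.get_config()                   # {'units': 16}
same_layer = CustomDense.from_config(config)  # default implementation: cls(**config)
```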
## When to use the Functional API
How to decide whether to use the Functional API to create a new model, or just subclass the `Model` class directly?
In general, the Functional API is higher-level, easier & safer to use, and has a number of features that subclassed Models do not support.
However, Model subclassing gives you greater flexibility when creating models that are not easily expressible as directed acyclic graphs of layers (for instance, you could not implement a Tree-RNN with the Functional API, you would have to subclass `Model` directly).
### Here are the strengths of the Functional API:
The properties listed below are all true for Sequential models as well (which are also data structures), but they aren't true for subclassed models (which are Python bytecode, not data structures).
#### It is less verbose.
No `super(MyClass, self).__init__(...)`, no `def call(self, ...):`, etc.
Compare:
```python
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)
```
With the subclassed version:
```python
class MLP(keras.Model):
def __init__(self, **kwargs):
super(MLP, self).__init__(**kwargs)
self.dense_1 = layers.Dense(64, activation='relu')
self.dense_2 = layers.Dense(10)
def call(self, inputs):
x = self.dense_1(inputs)
return self.dense_2(x)
# Instantiate the model.
mlp = MLP()
# Necessary to create the model's state.
# The model doesn't have a state until it's called at least once.
_ = mlp(tf.zeros((1, 32)))
```
#### It validates your model while you're defining it.
In the Functional API, your input specification (shape and dtype) is created in advance (via `Input`), and every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not.
This guarantees that any model you can build with the Functional API will run. All debugging (other than convergence-related debugging) will happen statically during the model construction, and not at execution time. This is similar to typechecking in a compiler.
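As a small illustration (a sketch, not part of the original guide), reusing a built layer on an incompatible input fails immediately at definition time rather than at training time:
```python
dense = layers.Dense(64)
dense(keras.Input(shape=(32,)))      # builds weights for 32 input features
try:
    dense(keras.Input(shape=(16,)))  # incompatible last dimension
except ValueError as e:
    print('Caught at definition time:', e)
```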
#### Your Functional model is plottable and inspectable.
You can plot the model as a graph, and you can easily access intermediate nodes in this graph -- for instance, to extract and reuse the activations of intermediate layers, as we saw in a previous example:
```python
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
```
#### Your Functional model can be serialized or cloned.
Because a Functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file that allows you to recreate the exact same model without having access to any of the original code. See our [saving and serialization guide](./saving_and_serializing.ipynb) for more details.
### Here are the weaknesses of the Functional API:
#### It does not support dynamic architectures.
The Functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all: for instance, recursive networks or Tree RNNs do not follow this assumption and cannot be implemented in the Functional API.
#### Sometimes, you just need to write everything from scratch.
When writing advanced architectures, you may want to do things that are outside the scope of "defining a DAG of layers": for instance, you may want to expose multiple custom training and inference methods on your model instance. This requires subclassing.
---
To dive more in-depth into the differences between the Functional API and Model subclassing, you can read [What are Symbolic and Imperative APIs in TensorFlow 2.0?](https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021).
## Mix-and-matching different API styles
Importantly, choosing between the Functional API and Model subclassing isn't a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they're Sequential models, Functional models, or subclassed Models/Layers written from scratch.
You can always use a Functional model or Sequential model as part of a subclassed Model/Layer:
```
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
```
Inversely, you can use any subclassed Layer or Model in the Functional API as long as it implements a `call` method that follows one of the following patterns:
- `call(self, inputs, **kwargs)` where `inputs` is a tensor or a nested structure of tensors (e.g. a list of tensors), and where `**kwargs` are non-tensor arguments (non-inputs).
- `call(self, inputs, training=None, **kwargs)` where `training` is a boolean indicating whether the layer should behave in training mode or in inference mode.
- `call(self, inputs, mask=None, **kwargs)` where `mask` is a boolean mask tensor (useful for RNNs, for instance).
- `call(self, inputs, training=None, mask=None, **kwargs)` -- of course you can have both masking and training-specific behavior at the same time.
In addition, if you implement the `get_config` method on your custom Layer or Model, the Functional models you create with it will still be serializable and clonable.
Here's a quick example where we use a custom RNN written from scratch in a Functional model:
```
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
self.classifier = layers.Dense(1, activation='sigmoid')
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that we specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when we create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
```
This concludes our guide on the Functional API!
Now you have at your fingertips a powerful set of tools for building deep learning models.
# Various Routines to Harvest CRIM Metadata from Production Server
### Just the basics here, allowing interaction with the `requests` library as a way to retrieve individual Observations and Relationships
```
import requests
import pandas as pd
```
# Variables
Now we can set a variable, in this case the URL of a single Observation in CRIM
```
Obs_url = "https://crimproject.org/data/observations/2/"
```
And if we call for that variable, it will tell us what it is:
```
Obs_url
```
# Requests
Now defining a new variable, which itself is a "get request" for our first variable:
```
response = requests.get(Obs_url)
type(response)
```
And now the json representation of that variable:
```
Obs_json = response.json()
Obs_json
```
# Json, Dictionaries, Keys and Values
JSON is in fact an elaborate dictionary, with items nested inside one another.
```
type(Obs_json)
```
We can list the fixed "keys" for that JSON, which are in turn paired with "values".
```
Obs_json.keys()
```
And here we are after the value of just ONE key
```
Obs_ema = Obs_json["ema"]
Obs_ema
```
It has a data type: string
```
type(Obs_ema)
```
Now calling for various other values for other keys:
```
Obs_json["musical_type"]
Obs_mt = Obs_json["musical_type"]
Obs_mt
```
The piece key actually is a dictionary within a dictionary, so it has LOTS of keys and values within it.
```
Obs_piece = Obs_json["piece"]
Obs_piece
```
And to interact with the items there, we need to call for a key *within* that key.
```
Obs_mei = Obs_piece["mei_links"]
Obs_mei
```
Various ways of calling for items according to their position. Note: Zero-based indexing!
```
len(Obs_mei)
Obs_mei[0]
Obs_json["piece"]["mei_links"][0]
Obs_json["ema"]
def get_ema_for_observation_id(obs_id):
    # get Obs_url (this first version only builds the URL; it is redefined more fully below)
url = "https://crimproject.org/data/observations/{}/".format(obs_id)
return url
def get_ema_for_observation_id(obs_id):
# get Obs_ema
my_ema_mei_dictionary = dict()
url = "https://crimproject.org/data/observations/{}/".format(obs_id)
response = requests.get(url)
Obs_json = response.json()
# Obs_ema = Obs_json["ema"]
my_ema_mei_dictionary["id"]=Obs_json["id"]
my_ema_mei_dictionary["musical type"]=Obs_json["musical_type"]
my_ema_mei_dictionary["int"]=Obs_json["mt_fg_int"]
my_ema_mei_dictionary["tint"]=Obs_json["mt_fg_tint"]
my_ema_mei_dictionary["ema"]=Obs_json["ema"]
my_ema_mei_dictionary["mei"]=Obs_json["piece"]["mei_links"][0]
my_ema_mei_dictionary["pdf"]=Obs_json["piece"]["pdf_links"][0]
# Obs_piece = Obs_json["piece"]
# Obs_mei = Obs_piece["mei_links"]
print(f'Got: {obs_id}')
# return {"ema":Obs_ema,"mei":Obs_mei}
return my_ema_mei_dictionary
```
Now we get a _particular_ observation.
```
get_ema_for_observation_id(20)
```
A new variable that holds the output of the "get_ema" routine. Next we will pass a series of numbers to it in a loop.
```
output = get_ema_for_observation_id(20)
# this holds the output as a LIST of DICTS
obs_data_list = []
# this is the list of Observation IDs to call
obs_call_list = [1,3,5,17,21]
# this is the LOOP; here it runs over range(1,11) rather than the call list above
# for observ in obs_call_list:
for observ in range(1,11):
call_list_output = get_ema_for_observation_id(observ)
    # print(call_list_output) would simply show the output in the notebook terminal;
    # instead we append it to the list of dicts, which adds one item after each loop
obs_data_list.append(call_list_output)
# Notes: a list has an append method that adds one item after each loop, e.g.
#   blank_list = [1,5,6]   (square brackets make a LIST)
#   blank_list.append(89)
# range takes parentheses, as in range(1,11); wrapping the range in list() gives
# an object we can iterate over -- and we can only append to a LIST, not a range,
# which matters since the range could be HUGE
Obs_range = list(range(1,11))
```
Now we call up the list of observations we built above by appending one result at a time to the initially empty list "[]".
```
obs_data_list
```
# Pandas as Data Frame or CSV
```
pd.Series(obs_data_list).to_csv("obs_data_list.csv")
# Pandas DataFrame interprets the series of items in each Dict
# as separate 'cells' (a tabular structure)
DF_output = pd.DataFrame(obs_data_list)
DF_output
DF_output.to_csv("obs_data_list.csv")
# two "==" means check for equality
# for 'contains' use str.contains("letter")
# can also use regex in this (for EMA range)
# Filter_by_Type = (DF_output["musical type"]=="Fuga") & (DF_output["id"]==8)
Filter_by_Type = DF_output["musical type"].str.contains("Fuga")
#
DF_output[Filter_by_Type]
```
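As the comments above hint, `str.contains` also accepts a regular expression. A small sketch (the pattern below is purely illustrative) filtering on the `ema` column:
```
# Illustrative sketch: use a regex with str.contains, e.g. EMA strings that start with a measure range like "1-4".
Filter_by_ema = DF_output["ema"].str.contains(r"^\d+-\d+", regex=True, na=False)
DF_output[Filter_by_ema]
```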
# Sheet Copy
Copy a tab from one sheet to another sheet.
# License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for the possible source):
- **Command**: "python starthinker_ui/manage.py colab"
- **Command**: "python starthinker/tools/colab.py [JSON RECIPE]"
# 1. Install Dependencies
First install the libraries needed to execute recipes; this only needs to be done once. Then click play.
```
!pip install git+https://github.com/google/starthinker
```
# 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
1. If the recipe uses a Google Cloud Project:
- Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).
1. If the recipe has **auth** set to **user**:
- If you have user credentials:
- Set the configuration **user** value to your user credentials JSON.
- If you DO NOT have user credentials:
- Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).
1. If the recipe has **auth** set to **service**:
- Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
```
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
```
# 3. Enter Sheet Copy Recipe Parameters
1. Provide the full edit URL for both sheets.
1. Provide the tab name for both sheets.
1. The tab will only be copied if it does not already exist.
Modify the values below for your use case (this can be done multiple times), then click play.
```
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'from_sheet': '',
'from_tab': '',
'to_sheet': '',
'to_tab': '',
}
print("Parameters Set To: %s" % FIELDS)
```
# 4. Execute Sheet Copy
This does NOT need to be modified unless you are changing the recipe; just click play.
```
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'sheets': {
'auth': 'user',
'template': {
'sheet': {'field': {'name': 'from_sheet', 'kind': 'string', 'order': 1, 'default': ''}},
'tab': {'field': {'name': 'from_tab', 'kind': 'string', 'order': 2, 'default': ''}}
},
'sheet': {'field': {'name': 'to_sheet', 'kind': 'string', 'order': 3, 'default': ''}},
'tab': {'field': {'name': 'to_tab', 'kind': 'string', 'order': 4, 'default': ''}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
```
<a href="https://colab.research.google.com/github/spyrosviz/Injury_Prediction_MidLong_Distance_Runners/blob/main/ML%20models/Models_Runners_Injury_Prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier, BaggingClassifier
from xgboost.sklearn import XGBClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, StratifiedKFold
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
import itertools
from collections import Counter
!pip install imbalanced-learn
from imblearn.over_sampling import SMOTE, RandomOverSampler, ADASYN
from imblearn.under_sampling import RandomUnderSampler, TomekLinks
import tensorflow as tf
```
**Use the following split if you want to hold out a specified number of athletes for the train and test sets. The last 10 athletes' instances were kept for the test set.**
```
'''Import data and hold out a specified test set'''
# Import data from excel, select the first 63 athletes events for train set and the last 10 athletes for test set
df = pd.read_excel(r'/content/drive/MyDrive/Runners_Injury_MLproject/Daily_Injury_Clean.xlsx',index_col = [0])
df_train = df[df['Athlete ID'] <= 63]
df_train.drop(['Date','Athlete ID'],axis=1,inplace=True)
df_test = df[df['Athlete ID'] > 63]
df_test.drop(['Date','Athlete ID'],axis=1,inplace=True)
# Check if df_train has any equal instances with df_test. We expect to return an empty dataframe if they do not share common instances
print(df_train[df_test.eq(df_train).all(axis=1)==True])
''' Set y '''
y_train = df_train['injury'].values
y_test = df_test['injury'].values
''' Set all columns for X except injury which is the target'''
X_train = df_train.drop(['injury'],axis=1).values
X_test = df_test.drop(['injury'],axis=1).values
column_names = df_train.drop(['injury'],axis=1).columns
#selected_features = ['Total Weekly Distance','Acute Load','Strain','Monotony','injury']
''' Set X after dropping selected features '''
#X_test = df_test.drop(selected_features,axis=1).values
#X_train = df_train.drop(selected_features,axis=1).values
#column_names = df_train.drop(selected_features,axis=1).columns
''' Set selected features as X '''
#X_train = df_train.loc[:,selected_features].values
#X_test = df_test.loc[:,selected_features].values
#column_names = df_train.loc[:,selected_features].columns
# Print dataframes shapes and respective number of healthy and injury events
print(column_names)
print(Counter(df_train['injury'].values))
print(Counter(df_test['injury'].values))
```
**Use the following dataset split if you want to hold out 2000 random healthy instances and 50 random injury instances**
```
'''Import data and holdout a random test set'''
# Import data from excel and drop Date and Athlete ID column
df = pd.read_excel(r'/content/drive/MyDrive/Runners_Injury_MLproject/run_injur_with_acuteloads.xlsx',index_col = [0])
# Hold out a test set with 50 random injury events and 2000 random healthy events
df_copy = df.copy()
df_copy.drop(['Date','Athlete ID'],axis=1,inplace=True)
df_inj = df_copy[df_copy['injury']==1].sample(50,random_state=42)
df_uninj = df_copy[df_copy['injury']==0].sample(2000,random_state=42)
df_test = pd.concat([df_inj,df_uninj],ignore_index=True)
# Drop the test set from the original dataframe
df_train = pd.concat([df_copy,df_test],ignore_index=True).drop_duplicates(keep=False)
# Set X and y
y_train = df_train['injury'].values
y_test = df_test['injury'].values
selected_features = ['Total Weekly Distance','Acute Load','Strain','Monotony','injury']
X_test = df_test.drop(selected_features,axis=1).values
X_train = df_train.drop(selected_features,axis=1).values
#X_train = df_train.loc[:,selected_features].values
#X_test = df_test.loc[:,selected_features].values
# Check if df_train has any equal instances with df_test. We expect to return an empty dataframe if they do not share common instances
# Print dataframe shapes and respective number of healthy and injury events
print(df_train[df_test.eq(df_train).all(axis=1)==True])
#print(df_train.drop(['Acute Load','Total Weekly Distance','Monotony','Strain','injury'],axis=1).columns)
print(df_train.shape)
print(Counter(df_train['injury'].values))
print(df_test.shape)
print(Counter(df_test['injury'].values))
class_imbalance = len(df_train[df_train['injury']==1].values)/len(df_train[df_train['injury']==0].values)
print(f'Class imbalance is {class_imbalance}')
```
**Write a function to prettify confusion matrix results.
The function comes from Daniel Bourke's TensorFlow course.**
```
def plot_confusion_matrix(y_true,y_pred,class_names,figsize=(10,10),text_size=15):
# create the confusion matrix
cm = confusion_matrix(y_true,y_pred)
cm_norm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis] # normalize confusion matrix
n_classes = cm.shape[0]
fig, ax = plt.subplots(figsize=figsize)
matrix_plot = ax.matshow(cm, cmap=plt.cm.Blues)
fig.colorbar(matrix_plot)
# Set labels to be classes
if class_names:
labels = class_names
else:
labels = np.arange(cm.shape[0])
# Label the axes
ax.set(title='Confusion Matrix',
xlabel = 'Predicted Label',
ylabel = 'True Label',
xticks = np.arange(n_classes),
yticks = np.arange(n_classes),
xticklabels = labels,
yticklabels = labels)
# Set x axis labels to bottom
ax.xaxis.set_label_position('bottom')
ax.xaxis.tick_bottom()
# Adjust label size
ax.yaxis.label.set_size(text_size)
ax.xaxis.label.set_size(text_size)
ax.title.set_size(text_size)
# Set threshold for different colors
threshold = (cm.max() + cm.min()) / 2
# Plot the text on each cell
for i, j in itertools.product(range(cm.shape[0]),range(cm.shape[1])):
plt.text(j,i,f'{cm[i,j]} ({cm_norm[i,j] * 100:.1f}%)',
horizontalalignment='center',
color='white' if cm[i,j] > threshold else 'black',
size = text_size)
```
Because there is very high class imbalance in the injury variable that we want to predict, we will try the following techniques to overcome this problem and see what works best:
* **Weighted XGBoost**
* **XGBoost with Smote algorithm for Resampling**
* **XGBoost model with Random Resampling**
* **Bagging XGBoost model with Random Resampling**
* **Neural Networks model with Random Undersampling**
```
# Set X and y with different resampling methods
'''SMOTE algorithm for oversampling 15% ratio and random undersampling 1-1 ratio'''
# Oversample the minority class to have number of instances equal with the 15% of the majority class
smote = SMOTE(sampling_strategy=0.15,random_state=1)
X_sm,y_sm = smote.fit_resample(X_train,y_train)
# Downsample the majority class to have number of instances equal with the minority class
undersamp = RandomUnderSampler(sampling_strategy=1,random_state=1)
X_smus,y_smus = undersamp.fit_resample(X_sm,y_sm)
'''Random oversampling 10% ratio and random undersampling 1-1 ratio'''
# Random over sampler for minority class to 1:10 class ratio
ros = RandomOverSampler(sampling_strategy=0.1,random_state=21)
X_ros,y_ros = ros.fit_resample(X_train,y_train)
# Undersample the majority class to have number of instances equal with the minority class
undersamp = RandomUnderSampler(sampling_strategy=1,random_state=21)
X_rosus,y_rosus = undersamp.fit_resample(X_ros,y_ros)
'''Random undersampling 1-1 ratio'''
# Random under sampler for majority class to 1:1 class ratio
rus = RandomUnderSampler(sampling_strategy=1,random_state=21)
X_rus,y_rus = rus.fit_resample(X_train,y_train)
'''Tomek Links Undersampling'''
tmkl = TomekLinks()
X_tmk, y_tmk = tmkl.fit_resample(X_train,y_train)
'''ADASYN for oversampling 15% ratio and random undersampler 1-1 ratio'''
# ADASYN oversample minority class to 15% of the majority class
adasyn = ADASYN(sampling_strategy=0.15,random_state=21)
X_ada, y_ada = adasyn.fit_resample(X_train,y_train)
# Random undersample the majority class to have equal instances with minority class
adarus = RandomUnderSampler(sampling_strategy=1,random_state=21)
X_adarus,y_adarus = adarus.fit_resample(X_ada,y_ada)
# Stratify crossvalidation
cv = StratifiedKFold(n_splits=5,shuffle=True,random_state=21)
```
## 1) Weighted XGBoost Model
```
'''Weighted XGBoost'''
# We will use the scale_pos_weight argument of XGBoost, which increases the penalty for misclassifying the positive (injury) class.
# The xgboost documentation suggests that the optimal value for scale_pos_weight is usually around
# sum(negative instances)/sum(positive instances). We will use RandomizedSearchCV to find the optimal value.
xgb_weight = XGBClassifier()
param_grid_weight = {"gamma":[0.01,0.1,1,10,50,100,1000],'reg_lambda':[1,5,10,20],
'learning_rate':np.arange(0.01,1,0.01),'eta':np.arange(0.1,1,0.1),'scale_pos_weight':[60,70,80,90,100]}
gscv_weight = RandomizedSearchCV(xgb_weight,param_distributions=param_grid_weight,cv=cv,scoring='roc_auc')
gscv_weight.fit(X_train,y_train)
print("Best param is {}".format(gscv_weight.best_params_))
print("Best score is {}".format(gscv_weight.best_score_))
optimal_gamma = gscv_weight.best_params_['gamma']
optimal_reg_lambda = gscv_weight.best_params_['reg_lambda']
optim_lr = gscv_weight.best_params_['learning_rate']
optimal_eta = gscv_weight.best_params_['eta']
optimal_scale_pos_weight = gscv_weight.best_params_['scale_pos_weight']
tuned_xgb_weight = XGBClassifier(gamma=optimal_gamma,learning_rate=optim_lr,eta=optimal_eta,reg_lambda=optimal_reg_lambda,scale_pos_weight=optimal_scale_pos_weight,
colsample_bytree=0.5,min_child_weight=90,objective='binary:logistic',subsample=0.5)
tuned_xgb_weight.fit(X_train,y_train,early_stopping_rounds=10,eval_metric='auc',eval_set=[(X_test,y_test)])
# Evaluate model's performance on the test set, with AUC, confusion matrix, sensitivity and specificity
y_pred = tuned_xgb_weight.predict(X_test)
print(f'Area under curve score is {roc_auc_score(y_test,tuned_xgb_weight.predict_proba(X_test)[:,1])}')
# Compute true positives, true negatives, false negatives and false positives
tp = confusion_matrix(y_test,y_pred)[1,1]
tn = confusion_matrix(y_test,y_pred)[0,0]
fn = confusion_matrix(y_test,y_pred)[1,0]
fp = confusion_matrix(y_test,y_pred)[0,1]
# Compute sensitivity and specificity
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f'Sensitivity is {sensitivity*100}% and specificity is {specificity*100}%')
plot_confusion_matrix(y_true=y_test, y_pred=y_pred, class_names=['Healthy events','Injury events'])
```
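As a sanity check on the `scale_pos_weight` search range above, the heuristic value suggested by the XGBoost documentation can be computed directly from the training labels (a minimal sketch using `y_train` from the split cells):
```
# Minimal sketch: heuristic scale_pos_weight = sum(negative instances) / sum(positive instances)
neg, pos = np.bincount(y_train.astype(int))
print(f'Heuristic scale_pos_weight is roughly {neg / pos:.1f}')
```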
## 2) XGBoost Model with SMOTE combined with Random Undersampling
```
'''XGBoost Classifier and SMOTE (Synthetic Minority Oversampling Technique) combined with Random Undersampling'''
# Check the number of instances for each class before and after resampling
print(Counter(y_train))
print(Counter(y_smus))
xgb_sm = XGBClassifier()
param_grid_sm = {"gamma":[0.01,0.1,1,10,50,100,1000],'learning_rate':np.arange(0.01,1,0.01),'eta':np.arange(0.1,1,0.1),'reg_lambda':[1,5,10,20]}
gscv_sm = RandomizedSearchCV(xgb_sm,param_distributions=param_grid_sm,cv=5,scoring='roc_auc')
gscv_sm.fit(X_smus,y_smus)
print("Best param is {}".format(gscv_sm.best_params_))
print("Best score is {}".format(gscv_sm.best_score_))
optimal_gamma = gscv_sm.best_params_['gamma']
optim_lr = gscv_sm.best_params_['learning_rate']
optimal_eta = gscv_sm.best_params_['eta']
optimal_lambda = gscv_sm.best_params_['reg_lambda']
tuned_xgb_sm = XGBClassifier(gamma=optimal_gamma,learning_rate=optim_lr,eta=optimal_eta,reg_lambda=optimal_lambda,subsample=0.4,
colsample_bytree=0.6,min_child_weight=90,objective='binary:logistic')
tuned_xgb_sm.fit(X_smus,y_smus,early_stopping_rounds=10,eval_metric='auc',eval_set=[(X_test,y_test)])
# Evaluate model's performance on the test set, with AUC, confusion matrix, sensitivity and specificity
y_pred = tuned_xgb_sm.predict(X_test)
print(f'Area under curve score is {roc_auc_score(y_test,tuned_xgb_sm.predict_proba(X_test)[:,1])}')
# Compute true positives, true negatives, false negatives and false positives
tp = confusion_matrix(y_test,y_pred)[1,1]
tn = confusion_matrix(y_test,y_pred)[0,0]
fn = confusion_matrix(y_test,y_pred)[1,0]
fp = confusion_matrix(y_test,y_pred)[0,1]
# Compute sensitivity and specificity
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f'Sensitivity is {sensitivity*100}% and specificity is {specificity*100}%')
plot_confusion_matrix(y_true=y_test, y_pred=y_pred, class_names=['Healthy events','Injury events'])
```
## 3) XGBoost Model with Random Resampling
```
'''XGBoost Classifier with Random Oversampling combined with Random Undersampling'''
# Check the number of instances for each class before and after resampling
print(Counter(y_train))
print(Counter(y_rosus))
xgb_rus = XGBClassifier()
param_grid_rus = {"gamma":[0.01,0.1,1,10,50,100,1000],'reg_lambda':[1,5,10,20],'learning_rate':np.arange(0.01,1,0.01),'eta':np.arange(0.1,1,0.1)}
gscv_rus = RandomizedSearchCV(xgb_rus,param_distributions=param_grid_rus,cv=5,scoring='roc_auc')
gscv_rus.fit(X_rosus,y_rosus)
print("Best param is {}".format(gscv_rus.best_params_))
print("Best score is {}".format(gscv_rus.best_score_))
optimal_gamma = gscv_rus.best_params_['gamma']
optimal_reg_lambda = gscv_rus.best_params_['reg_lambda']
optim_lr = gscv_rus.best_params_['learning_rate']
optimal_eta = gscv_rus.best_params_['eta']
tuned_xgb_rus = XGBClassifier(gamma=optimal_gamma,reg_lambda=optimal_reg_lambda,learning_rate=optim_lr,eta=optimal_eta,
colsample_bytree=0.7,min_child_weight=9,objective='binary:logistic',subsample=0.8)
tuned_xgb_rus.fit(X_rosus,y_rosus,early_stopping_rounds=10,eval_metric='auc',eval_set=[(X_test,y_test)])
# Evaluate model's performance on the test set, with AUC, confusion matrix, sensitivity and specificity
y_pred = tuned_xgb_rus.predict(X_test)
print(f'Area under curve score is {roc_auc_score(y_test,tuned_xgb_rus.predict_proba(X_test)[:,1])}')
# Compute true positives, true negatives, false negatives and false positives
tp = confusion_matrix(y_test,y_pred)[1,1]
tn = confusion_matrix(y_test,y_pred)[0,0]
fn = confusion_matrix(y_test,y_pred)[1,0]
fp = confusion_matrix(y_test,y_pred)[0,1]
# Compute sensitivity and specificity
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f'Sensitivity is {sensitivity*100}% and specificity is {specificity*100}%')
plot_confusion_matrix(y_true=y_test, y_pred=y_pred, class_names=['Healthy events','Injury events'])
```
## 4) Bagging Model with XGBoost base estimators and Random Resampling
```
'''Bagging Classifier with XGBoost base estimators and Random Oversampling combined with Undersampling'''
# Check the number of instances for each class before and after resampling
print(Counter(y_train))
print(Counter(y_rosus))
base_est = XGBClassifier(gamma=optimal_gamma,reg_lambda=optimal_reg_lambda,learning_rate=optim_lr,eta=optimal_eta,
colsample_bytree=0.6,min_child_weight=90,objective='binary:logistic',subsample=0.8,n_estimators=11)
# XGBoost base classifier
#base_est = XGBClassifier(n_estimators=512,learning_rate=0.01,max_depth=3)
# Bagging XGBoost Classifier
bagg = BaggingClassifier(base_estimator=base_est,n_estimators=9,max_samples=2048,random_state=21)
# Platt's Scaling to get probabilities outputs
calib_clf = CalibratedClassifierCV(bagg,cv=5)
# Evaluate model's performance on the test set, with AUC, confusion matrix, sensitivity and specificity
# You can switch threshold prob in order to bias sensitivity at the cost of specificity. It is set to default 0.5
calib_clf.fit(X_rosus,y_rosus)
y_pred_calib = calib_clf.predict_proba(X_test)
threshold_prob = 0.5
y_pred = []
for y_hat in y_pred_calib:
if y_hat[1] > threshold_prob:
y_pred.append(1)
else:
y_pred.append(0)
print(f'Area under curve score is {roc_auc_score(y_test,calib_clf.predict_proba(X_test)[:,1])}')
# Compute true positives, true negatives, false negatives and false positives
tp = confusion_matrix(y_test,np.array(y_pred))[1,1]
tn = confusion_matrix(y_test,np.array(y_pred))[0,0]
fn = confusion_matrix(y_test,np.array(y_pred))[1,0]
fp = confusion_matrix(y_test,np.array(y_pred))[0,1]
# Compute sensitivity and specificity
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f'Sensitivity is {sensitivity*100}% and specificity is {specificity*100}%')
# Plot confusion matrix
plot_confusion_matrix(y_true=y_test, y_pred=np.array(y_pred), class_names=['Healthy events','Injury events'])
```
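Since the comment above notes that `threshold_prob` trades sensitivity against specificity, here is a short sketch that sweeps a few thresholds on the calibrated classifier to make that trade-off visible:
```
# Sketch: sweep decision thresholds to see the sensitivity/specificity trade-off of the calibrated model.
probs = calib_clf.predict_proba(X_test)[:, 1]
for thr in [0.3, 0.4, 0.5, 0.6, 0.7]:
    preds = (probs > thr).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    print(f'threshold={thr:.1f}  sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}')
```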
## 5) Neural Networks Model
```
'''Neural Networks Model'''
# Check the number of instances for each class before and after resampling
print(Counter(y_train))
print(Counter(y_rus))
# Scale X data
X_scaled_rus = MinMaxScaler().fit_transform(X_rus)
X_scaled_test = MinMaxScaler().fit_transform(X_test)
# set random seed for reproducibility
tf.random.set_seed(24)
# create model with 9 hidden layers (128/64/32 neurons) and 1 output layer
nn_model = tf.keras.Sequential([tf.keras.layers.Dense(128,activation="relu"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(32,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(32,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(1,activation="sigmoid")
])
# compile model
nn_model.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.002),
metrics=['AUC'])
# set an early-stopping callback (stop if the loss doesn't improve for 3 epochs) and fit the training data
callback = tf.keras.callbacks.EarlyStopping(monitor='loss',patience=3)
history = nn_model.fit(X_scaled_rus,y_rus,epochs=10,batch_size=32,callbacks=[callback])
# Evaluate model performance on test set, with AUC, confusion matrix, sensitivity and specificity
y_prob_pred = nn_model.predict(X_scaled_test)
y_pred = []
for i in y_prob_pred:
    if i <= 0.5:
        y_pred.append(0)
    else:
        y_pred.append(1)
y_pred = np.array(y_pred)
# sanity check: predicted labels should only be 0 or 1, so this should print an empty array
print(y_pred[y_pred > 1])
# Compute true positives, true negatives, false negatives and false positives
tn, fp, fn, tp = confusion_matrix(y_test, np.array(y_pred)).ravel()
# Compute sensitivity and specificity
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f'Sensitivity is {sensitivity*100}% and specificity is {specificity*100}%')
# Plot confusion matrix
plot_confusion_matrix(y_true=y_test, y_pred=np.array(y_pred), class_names=['Healthy events','Injury events'])
# evaluate the model
print(f'Area Under Curve is {nn_model.evaluate(X_scaled_test,y_test)[1]}')
'''Find optimal Learning Rate for nn_model'''
# set random seed for reproducibility
tf.random.set_seed(24)
# recreate the same architecture: 9 hidden layers and 1 output layer
nn_model = tf.keras.Sequential([tf.keras.layers.Dense(128,activation="relu"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(32,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(32,activation="relu"),
#tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(1,activation="sigmoid")
])
# compile model
nn_model.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["AUC"])
# learning rate scheduler: start at 1e-4 and multiply the learning rate by 10 every 20 epochs
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-4 * 10 ** (epoch/20))
history = nn_model.fit(X_scaled_rus,y_rus,epochs=30,callbacks=[lr_scheduler])
# plot loss vs learning rate to find a good learning rate
plt.figure(figsize=[10,10])
plt.semilogx(1e-4 * (10 ** (tf.range(30)/20)),history.history["loss"])
plt.ylabel("Loss")
plt.title("Learning Rate vs Loss")
plt.show()
'''Crossvalidation on nn_model'''
from keras.wrappers.scikit_learn import KerasClassifier
tf.random.set_seed(24)
def create_nn_model():
    # create model with 9 hidden layers and 1 output layer (same architecture as above)
    nn_model = tf.keras.Sequential([tf.keras.layers.Dense(128,activation="relu"),
                                    tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(128,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(128,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(128,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(64,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(64,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(64,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(32,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(32,activation="relu"),
                                    #tf.keras.layers.Dropout(0.1),
                                    tf.keras.layers.Dense(1,activation="sigmoid")
                                    ])
    # compile model
    nn_model.compile(loss="binary_crossentropy",
                     optimizer=tf.keras.optimizers.Adam(learning_rate=0.002),
                     metrics=["AUC"])
    return nn_model
neural_network = KerasClassifier(build_fn=create_nn_model,
epochs=10)
# Evaluate neural network using 5-fold cross-validation
cv = StratifiedKFold(n_splits=5,shuffle=True,random_state=1)
cross_val_score(neural_network, X_scaled_rus, y_rus, scoring='roc_auc', cv=cv)
```
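The learning-rate-finder cell above trains for 30 epochs while the learning rate grows geometrically. One common heuristic for turning the resulting loss curve into a concrete learning rate is to locate the loss minimum and back off by roughly an order of magnitude; the sketch below does that, reusing `history` from the learning-rate-scheduler fit. The factor-of-10 back-off is an assumption, not something prescribed by the notebook.
```
# Read a candidate learning rate off the LR-range run above.
# Assumes `history` still holds the 30-epoch LearningRateScheduler fit.
lrs = 1e-4 * 10 ** (np.arange(30) / 20)
losses = np.array(history.history["loss"])
best = np.argmin(losses)
candidate_lr = lrs[best] / 10  # back off one order of magnitude from the loss minimum (heuristic)
print(f"loss minimum at lr={lrs[best]:.2e}; candidate training lr={candidate_lr:.2e}")
```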
|
github_jupyter
|
```
import pandas as pd
import numpy as np
from datetime import datetime
import os
```
# Define Which Input Files to Use
The default settings will use the input files recently produced in Step 1) using the notebook `get_eia_demand_data.ipynb`. For those interested in reproducing the exact results included in the repository, you will need to point to the files containing the original `raw` EIA demand data that we queried on 10 Sept 2019.
```
merge_with_step1_files = False # used to run step 2 on the most recent files
merge_with_10sept2019_files = True # used to reproduce the documented results
assert((merge_with_step1_files != merge_with_10sept2019_files) and
(merge_with_step1_files == True or merge_with_10sept2019_files == True)), "One of these must be true: 'merge_with_step1_files' and 'merge_with_10sept2019_files'"
if merge_with_step1_files:
    input_path = './data'
if merge_with_10sept2019_files:
    # input_path is the path to the downloaded data from Zenodo: https://zenodo.org/record/3517197
    input_path = '/BASE/PATH/TO/ZENODO'
    input_path += '/data/release_2019_Oct/original_eia_files'
    assert(os.path.exists(input_path)), f"You must set the base directory for the Zenodo data; {input_path} does not exist"
# If you did not run step 1, make the /data directory
if not os.path.exists('./data'):
    os.mkdir('./data')
```
# Make the output directories
```
# Make output directories
out_base = './data/final_results'
if not os.path.exists(out_base):
    os.mkdir(out_base)
for subdir in ['balancing_authorities', 'regions', 'interconnects', 'contiguous_US']:
    # exist_ok=True lets this cell be re-run without failing on existing directories
    os.makedirs(f"{out_base}/{subdir}", exist_ok=True)
    print(f"Final results files will be located here: {out_base}/{subdir}")
```
# Useful functions
```
# All 56 balancing authorities that have demand (BA)
def return_all_regions():
    return [
        'AEC', 'AECI', 'CPLE', 'CPLW',
        'DUK', 'FMPP', 'FPC',
        'FPL', 'GVL', 'HST', 'ISNE',
        'JEA', 'LGEE', 'MISO', 'NSB',
        'NYIS', 'PJM', 'SC',
        'SCEG', 'SOCO',
        'SPA', 'SWPP', 'TAL', 'TEC',
        'TVA', 'ERCO',
        'AVA', 'AZPS', 'BANC', 'BPAT',
        'CHPD', 'CISO', 'DOPD',
        'EPE', 'GCPD', 'IID',
        'IPCO', 'LDWP', 'NEVP', 'NWMT',
        'PACE', 'PACW', 'PGE', 'PNM',
        'PSCO', 'PSEI', 'SCL', 'SRP',
        'TEPC', 'TIDC', 'TPWR', 'WACM',
        'WALC', 'WAUW',
        'OVEC', 'SEC',
    ]
# All 54 "usable" balancing authorities (BAs) (excludes OVEC and SEC).
# These 2 have significant enough reporting problems that we do not impute cleaned data for them.
def return_usable_BAs():
    return [
        'AEC', 'AECI', 'CPLE', 'CPLW',
        'DUK', 'FMPP', 'FPC',
        'FPL', 'GVL', 'HST', 'ISNE',
        'JEA', 'LGEE', 'MISO', 'NSB',
        'NYIS', 'PJM', 'SC',
        'SCEG', 'SOCO',
        'SPA', 'SWPP', 'TAL', 'TEC',
        'TVA', 'ERCO',
        'AVA', 'AZPS', 'BANC', 'BPAT',
        'CHPD', 'CISO', 'DOPD',
        'EPE', 'GCPD', 'IID',
        'IPCO', 'LDWP', 'NEVP', 'NWMT',
        'PACE', 'PACW', 'PGE', 'PNM',
        'PSCO', 'PSEI', 'SCL', 'SRP',
        'TEPC', 'TIDC', 'TPWR', 'WACM',
        'WALC', 'WAUW',
        # 'OVEC', 'SEC',
    ]
# mapping of each balancing authority (BA) to its associated
# U.S. interconnect (IC).
def return_ICs_from_BAs():
    return {
        'EASTERN_IC' : [
            'AEC', 'AECI', 'CPLE', 'CPLW',
            'DUK', 'FMPP', 'FPC',
            'FPL', 'GVL', 'HST', 'ISNE',
            'JEA', 'LGEE', 'MISO', 'NSB',
            'NYIS', 'PJM', 'SC',
            'SCEG', 'SOCO',
            'SPA', 'SWPP', 'TAL', 'TEC',
            'TVA',
            'OVEC', 'SEC',
        ],
        'TEXAS_IC' : [
            'ERCO',
        ],
        'WESTERN_IC' : [
            'AVA', 'AZPS', 'BANC', 'BPAT',
            'CHPD', 'CISO', 'DOPD',
            'EPE', 'GCPD',
            'IID',
            'IPCO', 'LDWP', 'NEVP', 'NWMT',
            'PACE', 'PACW', 'PGE', 'PNM',
            'PSCO', 'PSEI', 'SCL', 'SRP',
            'TEPC', 'TIDC', 'TPWR', 'WACM',
            'WALC', 'WAUW',
        ]
    }
# Defines a mapping between the balancing authorities (BAs)
# and their locally defined region based on EIA naming.
# The mapping is built from EIA's balancing authority acronym table (CSV file).
def return_BAs_per_region_map():
    regions = {
        'CENT' : 'Central',
        'MIDW' : 'Midwest',
        'TEN' : 'Tennessee',
        'SE' : 'Southeast',
        'FLA' : 'Florida',
        'CAR' : 'Carolinas',
        'MIDA' : 'Mid-Atlantic',
        'NY' : 'New York',
        'NE' : 'New England',
        'TEX' : 'Texas',
        'CAL' : 'California',
        'NW' : 'Northwest',
        'SW' : 'Southwest'
    }
    rtn_map = {}
    for k, v in regions.items():
        rtn_map[k] = []
    # Load EIA's Balancing Authority Acronym table
    # https://www.eia.gov/realtime_grid/
    df = pd.read_csv('data/balancing_authority_acronyms.csv',
                     skiprows=1) # skip first row as it is source info
    # Loop over all rows and fill map
    for idx in df.index:
        # Skip Canada and Mexico
        if df.loc[idx, 'Region'] in ['Canada', 'Mexico']:
            continue
        reg_acronym = ''
        # Get the acronym for this region
        for k, v in regions.items():
            if v == df.loc[idx, 'Region']:
                reg_acronym = k
                break
        assert(reg_acronym != '')
        rtn_map[reg_acronym].append(df.loc[idx, 'Code'])
    tot = 0
    for k, v in rtn_map.items():
        tot += len(v)
    print(f"Total US48 BAs mapped {tot}. Recall 11 are generation only.")
    return rtn_map
# Assume the MICE results file is a subset of the original hours
def trim_rows_to_match_length(mice, df):
    mice_start = mice.loc[0, 'date_time']
    mice_end = mice.loc[len(mice.index)-1, 'date_time']
    to_drop = []
    for idx in df.index:
        if df.loc[idx, 'date_time'] != mice_start:
            to_drop.append(idx)
        else: # stop once equal
            break
    for idx in reversed(df.index):
        if df.loc[idx, 'date_time'] != mice_end:
            to_drop.append(idx)
        else: # stop once equal
            break
    df = df.drop(to_drop, axis=0)
    df = df.reset_index()
    assert(len(mice.index) == len(df.index))
    return df
# Load balancing authority files already containing the full MICE results.
# Aggregate associated regions into regional, interconnect, or CONUS files.
# Treat 'MISSING' and 'EMPTY' values as zeros when aggregating.
def merge_BAs(region, bas, out_base, folder):
    print(region, bas)
    # Remove BAs which are generation only as well as SEC and OVEC.
    # See main README regarding SEC and OVEC.
    usable_BAs = return_usable_BAs()
    good_bas = []
    for ba in bas:
        if ba in usable_BAs:
            good_bas.append(ba)
    # Seed the aggregate with one BA's file (order does not matter for the sum)
    first_ba = good_bas.pop()
    master = pd.read_csv(f'{out_base}/balancing_authorities/{first_ba}.csv', na_values=['MISSING', 'EMPTY'])
    master = master.fillna(0)
    master = master.drop(['category', 'forecast demand (MW)'], axis=1)
    for ba in good_bas:
        df = pd.read_csv(f'{out_base}/balancing_authorities/{ba}.csv', na_values=['MISSING', 'EMPTY'])
        df = df.fillna(0)
        master['raw demand (MW)'] += df['raw demand (MW)']
        master['cleaned demand (MW)'] += df['cleaned demand (MW)']
    master.to_csv(f'{out_base}/{folder}/{region}.csv', index=False)
# Do both the distribution of balancing authority level results to new BA files
# and generate regional, interconnect, and CONUS aggregate files.
def distribute_MICE_results(raw_demand_file_loc, screening_file, mice_results_csv, out_base):
    # Load screening results
    screening = pd.read_csv(screening_file)
    # Load MICE results
    mice = pd.read_csv(mice_results_csv)
    screening = trim_rows_to_match_length(mice, screening)
    # Distribute to single BA results files first
    print("Distribute MICE results per-balancing authority:")
    for ba in return_usable_BAs():
        print(ba)
        df = pd.read_csv(f"{raw_demand_file_loc}/{ba}.csv")
        df = trim_rows_to_match_length(mice, df)
        df_out = pd.DataFrame({
            'date_time': df['date_time'],
            'raw demand (MW)': df['demand (MW)'],
            'category': screening[f'{ba}_category'],
            'cleaned demand (MW)': mice[ba],
            'forecast demand (MW)': df['forecast demand (MW)']
        })
        df_out.to_csv(f'./{out_base}/balancing_authorities/{ba}.csv', index=False)
    # Aggregate balancing authority level results into EIA regions
    print("\nEIA regional aggregation:")
    for region, bas in return_BAs_per_region_map().items():
        merge_BAs(region, bas, out_base, 'regions')
    # Aggregate balancing authority level results into CONUS interconnects
    print("\nCONUS interconnect aggregation:")
    for region, bas in return_ICs_from_BAs().items():
        merge_BAs(region, bas, out_base, 'interconnects')
    # Aggregate balancing authority level results into CONUS total
    print("\nCONUS total aggregation:")
    merge_BAs('CONUS', return_usable_BAs(), out_base, 'contiguous_US')
```
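As a quick check that the helper functions above behave as expected, the short sketch below exercises the two mapping helpers and prints how many balancing authorities land in each EIA region and each interconnect. It relies only on the functions defined above and the same acronym CSV they already read.
```
# Exercise the mapping helpers defined above.
region_map = return_BAs_per_region_map()
for region, bas in region_map.items():
    print(f"{region}: {len(bas)} balancing authorities")
print({ic: len(bas) for ic, bas in return_ICs_from_BAs().items()})
```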
# Run the distribution and aggregation
```
# The output file generated by Step 2 listing the categories for each time step
screening_file = './data/csv_MASTER.csv'
# The output file generated by Step 3 which runs the MICE algo and has the cleaned demand values
mice_file = 'MICE_output/mean_impute_csv_MASTER.csv'
distribute_MICE_results(input_path, screening_file, mice_file, out_base)
```
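Before running the formal checks below, it can be useful to eyeball one of the generated files. The sketch below loads a single regional output; the choice of `CAL` is arbitrary, and the column names follow what `merge_BAs` writes ('date_time', 'raw demand (MW)', 'cleaned demand (MW)').
```
# Quick look at one aggregated regional file (region choice is arbitrary).
cal = pd.read_csv(f"{out_base}/regions/CAL.csv")
print(cal.columns.tolist())
print(cal[['raw demand (MW)', 'cleaned demand (MW)']].describe())
```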
# Test distribution and aggregation
This cell simply checks that the results all add up.
```
# Compare each value in the vectors
def compare(vect1, vect2):
    cnt = 0
    clean = True
    for v1, v2 in zip(vect1, vect2):
        if v1 != v2:
            print(f"Error at idx {cnt} {v1} != {v2}")
            clean = False
        cnt += 1
    return clean
def test_aggregation(raw_demand_file_loc, screening_file, mice_results_csv, out_base):
    # Load MICE results
    usable_BAs = return_usable_BAs()
    mice = pd.read_csv(mice_results_csv)
    # Sum all result BAs
    tot_imp = np.zeros(len(mice.index))
    for col in mice.columns:
        if col not in usable_BAs:
            continue
        tot_imp += mice[col]
    # Sum Raw
    tot_raw = np.zeros(len(mice.index))
    for ba in return_usable_BAs():
        df = pd.read_csv(f"{raw_demand_file_loc}/{ba}.csv", na_values=['MISSING', 'EMPTY'])
        df = trim_rows_to_match_length(mice, df)
        df = df.fillna(0)
        tot_raw += df['demand (MW)']
    # Check BA results distribution
    print("\nBA Distribution:")
    new_tot_raw = np.zeros(len(mice.index))
    new_tot_clean = np.zeros(len(mice.index))
    for ba in return_usable_BAs():
        df = pd.read_csv(f"{out_base}/balancing_authorities/{ba}.csv", na_values=['MISSING', 'EMPTY'])
        df = df.fillna(0)
        new_tot_raw += df['raw demand (MW)']
        new_tot_clean += df['cleaned demand (MW)']
    assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
    assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
    print("BA Distribution okay!")
    # Check aggregate balancing authority level results into EIA regions
    print("\nEIA regional aggregation:")
    new_tot_raw = np.zeros(len(mice.index))
    new_tot_clean = np.zeros(len(mice.index))
    for region, bas in return_BAs_per_region_map().items():
        df = pd.read_csv(f"{out_base}/regions/{region}.csv")
        new_tot_raw += df['raw demand (MW)']
        new_tot_clean += df['cleaned demand (MW)']
    assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
    assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
    print("Regional sums okay!")
    # Aggregate balancing authority level results into CONUS interconnects
    print("\nCONUS interconnect aggregation:")
    new_tot_raw = np.zeros(len(mice.index))
    new_tot_clean = np.zeros(len(mice.index))
    for region, bas in return_ICs_from_BAs().items():
        df = pd.read_csv(f"{out_base}/interconnects/{region}.csv")
        new_tot_raw += df['raw demand (MW)']
        new_tot_clean += df['cleaned demand (MW)']
    assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
    assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
    print("Interconnect sums okay!")
    # Aggregate balancing authority level results into CONUS total
    print("\nCONUS total aggregation:")
    new_tot_raw = np.zeros(len(mice.index))
    new_tot_clean = np.zeros(len(mice.index))
    df = pd.read_csv(f"{out_base}/contiguous_US/CONUS.csv")
    new_tot_raw += df['raw demand (MW)']
    new_tot_clean += df['cleaned demand (MW)']
    assert(compare(tot_raw, new_tot_raw)), "Error in raw sums."
    assert(compare(tot_imp, new_tot_clean)), "Error in imputed values."
    print("CONUS sums okay!")
test_aggregation(input_path, screening_file, mice_file, out_base)
```
|
github_jupyter
|