path (string, lengths 7 to 265) | concatenated_notebook (string, lengths 46 to 17M)
---|---
Fibonacci numbers.ipynb | ###Markdown
Create the Fibonacci numbers series
###Code
def fibonacci(N):
    # print the Fibonacci series from F(0) up to F(N)
    a = 0
    b = 1
    print(a)
    print(b)
    if N == 1:
        # F(0) and F(1) are already printed
        return
    for i in range(2, N + 1):
        c = a + b
        a = b
        b = c
        print(c)

N = int(input("Enter your number:"))
fibonacci(N)
###Output
Enter your number:5
0
1
1
2
3
5
###Markdown
Given a number N, find the sum of all the even-valued terms in the Fibonacci sequence that are less than or equal to N. Try generating only even Fibonacci numbers instead of iterating over all Fibonacci numbers.
###Code
count = 0

def fibonacci(N):
    # accumulate the sum of the even Fibonacci terms <= N into the global count
    global count
    a = 0
    b = 1
    for i in range(2, N + 1):
        c = a + b
        a = b
        b = c
        if c % 2 == 0:
            if c <= N:
                count += c
            else:
                # the first even term larger than N ends the search
                break
    return count

N = int(input())
fibonacci(N)
print(count)
###Output
200
Output : 188
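###Markdown
The task above also suggests generating only the even Fibonacci terms directly, which the cell above does not do. The following cell is a small sketch added for illustration (it is not part of the original notebook); it uses the identity that the even Fibonacci numbers satisfy E(k) = 4*E(k-1) + E(k-2), with E(1) = 2 and E(2) = 8.
###Code
def sum_even_fibonacci(N):
    """Sum of the even Fibonacci numbers <= N, generating only even terms."""
    prev, curr = 2, 8
    total = 2 if N >= 2 else 0
    while curr <= N:
        total += curr
        # every even Fibonacci number is 4 times the previous even one plus the one before that
        prev, curr = curr, 4 * curr + prev
    return total

# sum_even_fibonacci(200) gives 2 + 8 + 34 + 144 = 188, matching the output above
###Output
_____no_output_____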
###Markdown
Program to display first n Fibonacci numbers
###Code
# function to display first n fibonacci numbers
def print_n_fibonacci(n):
    # handling the edge cases
    if n <= 0:
        return
    if n == 1:
        print('0')
        return
    if n == 2:
        print('0, 1')
        return
    # first two numbers
    firstNum = 0
    secondNum = 1
    n -= 2
    # displaying the first two numbers
    print(0, 1, end=", ")
    for num in range(n):
        # finding the next number
        sumOfPreviousTwo = firstNum + secondNum
        # print the number
        if num == n - 1:
            print(sumOfPreviousTwo)
        else:
            print(sumOfPreviousTwo, end=", ")
        # updating the previous two numbers
        firstNum = secondNum
        secondNum = sumOfPreviousTwo

print('First 2 fibonacci numbers: ')
print_n_fibonacci(2)
print('\nFirst 5 fibonacci numbers: ')
print_n_fibonacci(5)
print('\nFirst 50 fibonacci numbers: ')
print_n_fibonacci(50)
###Output
First 2 fibonacci numbers:
0, 1
First 5 fibonacci numbers:
0 1, 1, 2, 3
First 50 fibonacci numbers:
0 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141, 267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073, 4807526976, 7778742049
|
lecture08-xarray/lecture08.ipynb | ###Markdown
xarray
###Code
COUNTRIES = 'Austria', 'Germany', 'Switzerland', 'Italy', 'Spain', 'Sweden', 'United Kingdom'
###Output
_____no_output_____
###Markdown
From previous lecture...
###Code
CONFIG_FILE = '../entsoe-data.config'
if not os.path.exists(CONFIG_FILE):
download_dir = input('Path to ENTSO-E data folder: ')
if not os.path.isdir(download_dir):
raise RuntimeError(f'Invalid download_dir, please run cell again: {download_dir}')
with open(CONFIG_FILE, 'w') as f:
f.write(download_dir)
else:
with open(CONFIG_FILE) as f:
download_dir = f.read()
# Clear the output after this cell if you want to avoid having your path in the notebook (or execute it twice)!
def read_single_csv_entso_e(file):
return pd.read_csv(file, sep='\t', encoding='utf-16', parse_dates=["DateTime"])
def load_complete_entso_e_data(directory):
pattern = Path(directory) / '*.csv'
files = glob.glob(str(pattern))
if not files:
raise ValueError(f"No files found when searching in {pattern}, wrong directory?")
print(f'Concatenating {len(files)} csv files...')
each_csv_file = [read_single_csv_entso_e(file) for file in files]
print("Files read, concatenating to dataframe...")
data = pd.concat(each_csv_file, ignore_index=True)
print("Sorting and indexing...")
data = data.set_index("DateTime")
data.sort_index(inplace=True)
# filter only for countries under consideration to make things faster and consume less RAM...
data_ = data[data.AreaName.isin(COUNTRIES)].copy()
del data
data = data_
print("Resampling...")
data = data.groupby('AreaName').resample("1h").mean()
# we should end up with a dataframe with DateTime as index, AreaName as columns
# and Total load as entries...
print("Reshaping dataframe...")
data = data.TotalLoadValue.unstack(level=0).interpolate()
print("Loading done.")
return data
# note: this might require 3GB of RAM
power_demand = load_complete_entso_e_data(download_dir)
###Output
Concatenating 69 csv files...
Files read, concatenating to dataframe...
Sorting and indexing...
Resampling...
Reshaping dataframe...
Loading done.
###Markdown
Erratum: there was a mistake last time. A random split cannot be used on time series to determine quality of fit, in particular overfitting. Source: Chabacano CC-BY-SA 4.0
###Code
def f(x):
return np.cos(x * 2 * np.pi)
X = np.linspace(0, 3, num=100)[:, np.newaxis]
Y = f(X)[:, 0]
plt.plot(X[:, 0], Y, 'o-')
forest = ensemble.RandomForestRegressor()
forest.fit(X, Y)
forest.score(X, Y)
###Output
_____no_output_____
###Markdown
Ok, we got a good score on our training data! Let's generate some new (unseen) samples for `X` and use them as test data!
###Code
X_test_inbetween = np.linspace(1, 3, num=20)[:, np.newaxis]
X_test_after = np.linspace(3, 5, num=20)[:, np.newaxis]
plt.plot(X[:, 0], Y, 'o-', label='Training data')
plt.plot(X_test_inbetween[:, 0],
forest.predict(X_test_inbetween),
'o-', label='Test data (in between)')
plt.plot(X_test_after[:, 0],
forest.predict(X_test_after),
'o-', label='Test data (after)')
plt.legend()
###Output
_____no_output_____
###Markdown
Both test sets contain only unseen values, but the performance is much worse on `X_test_after`. The forest learned only to calculate `f()` between 0 and 3 and can't predict values above 3. **Mistake from last time:** In our case, splitting the data randomly into test/training was a very bad choice, because we measured the score on `X_in_between` (random samples between 2015 and 2019) but we are probably interested in a score on `X_after` (training 2015-2018, test 2019). Let's now train a different forest - this time a bit more similar to what we did last week. We will assume that the signal is periodic and train only on the fractional part of the period.
###Code
# just train on fraction of the period, i.e. just use decimals after the comma
X_fraction = np.modf(X)[0]
forest_periodic = ensemble.RandomForestRegressor()
forest_periodic.fit(X_fraction, Y)
x = np.linspace(1, 5, num=200)
X_fraction_test = np.modf(x)[0][:, np.newaxis]
plt.plot(X[:, 0], Y, 'o-', label='Training data')
plt.plot(x, forest_periodic.predict(X_fraction_test), 'o-', label='Test data')
plt.legend()
###Output
_____no_output_____
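###Markdown
To make the erratum above concrete, the following cell is a small sketch added for illustration (it is not part of the original lecture); it reuses the synthetic `X`, `Y` and the `ensemble` module from the cells above and evaluates the forest with a chronological split, so that the test period always lies after the training period.
###Code
# chronological split: train on the first 80% of the axis, test on the rest
split = int(0.8 * len(X))
X_train_chrono, X_test_chrono = X[:split], X[split:]
Y_train_chrono, Y_test_chrono = Y[:split], Y[split:]

forest_chrono = ensemble.RandomForestRegressor()
forest_chrono.fit(X_train_chrono, Y_train_chrono)

# the score on the later, unseen period is much more pessimistic (and realistic)
# than the score obtained with a random split
forest_chrono.score(X_train_chrono, Y_train_chrono), forest_chrono.score(X_test_chrono, Y_test_chrono)

# sklearn.model_selection.TimeSeriesSplit offers the same idea for cross-validation
###Output
_____no_output_____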
###Markdown
If there is noise or a trend in the data, it doesn't work that well, but it is still good enough. That means our forest wasn't performing too badly, but much worse than we thought it did.
The easy way: plot relative power demand by aggregating weekly
###Code
power_demand_normal = power_demand['2015-01-01':'2019-12-31']
power_demand_covid = power_demand['2020-01-01':'2020-12-31']
power_demand_covid.Austria.plot()
power_demand_normal_weekly = power_demand_normal.groupby(power_demand_normal.index.week).mean()[1:-1]
power_demand_covid_weekly = power_demand_covid.groupby(power_demand_covid.index.week).mean()[1:-1]
(power_demand_covid_weekly / power_demand_normal_weekly).plot()
plt.xlabel('Week of the year');
(power_demand_covid_weekly.Austria / power_demand_normal_weekly.Austria).plot()
plt.xlabel('Week of the year');
###Output
_____no_output_____
###Markdown
Temperature data
ERA5 data is provided as a NetCDF file. The library `xarray` comes in very handy to load such files.
###Code
import xarray as xr
temperatures_dataset = xr.load_dataset('../data/temperatures_era5.nc')
temperatures_dataset
temperatures = temperatures_dataset.t2m
temperatures
###Output
_____no_output_____
###Markdown
Oh there are NaN values? How many of them?
###Code
total_size = temperatures.sizes['time'] * temperatures.sizes['latitude'] * temperatures.sizes['longitude']
float(np.isnan(temperatures).sum() / total_size)
###Output
_____no_output_____
###Markdown
Uh, 55% of missing values... That's not good! What could that be?
###Code
(~np.isnan(temperatures)).prod(dim='time').plot.imshow(cmap='gray')
###Output
_____no_output_____
###Markdown
**Note:** We downloaded the product `'reanalysis-era5-land'`; there is also `'era5-single-levels'`, which also contains data for locations in the sea.
Exercise 1
Plot the mean temperature for each location! (There will be a warning because of the NaNs, but that's okay.)
Temperature seems not to be in °C...
###Code
temperatures = temperatures - 273.15
temperatures.name = 'Temperature [C°]'
temperatures.mean(dim='time').plot.imshow()
###Output
_____no_output_____
###Markdown
Pick random grid points to calculate the mean
As a next step, we want to calculate the mean temperature for each country. We'll pick just some random samples from the grid for each country, to make the computation of the mean faster. The coordinates are already prepared as a CSV file, which has been generated using the following code.
###Code
def choose_country_points(longitude, latitude, grid_points_per_country=200):
"""Pick random points for each country from the grid with axis ``longitude`` and ``latitude``.
    ``grid_points_per_country`` is the number of points to be picked for each country.
Returns a dataframe with two columns per country (longitude & latitude)
and ``grid_points_per_country`` numbers of rows.
Note: GeoJSON always uses WGS84:
https://tools.ietf.org/html/rfc7946
"""
# local import to avoid dependency
import geopandas
from shapely.geometry import Point
longitudes, latitudes = np.meshgrid(longitude, latitude)
longitudes = longitudes.flatten()
latitudes = latitudes.flatten()
grid_points = geopandas.GeoSeries(geopandas.points_from_xy(longitudes.flatten(),
latitudes.flatten()))
# XXX fix me, correct path!
country_borders = geopandas.read_file('../data/countries.geojson')
chosen_gridpoints = pd.DataFrame()
for country in COUNTRIES:
print(f"Picking grid points for {country}...")
is_country = country_borders.ADMIN == country
country_border = country_borders[is_country].geometry.iloc[0]
is_in_country = grid_points.within(country_border)
number_of_points = is_in_country.sum()
# make things reproducible!
np.random.seed(42)
idcs = np.random.randint(number_of_points, size=grid_points_per_country)
chosen_gridpoints[f'{country}_longitude'] = longitudes[is_in_country][idcs]
chosen_gridpoints[f'{country}_latitude'] = latitudes[is_in_country][idcs]
return chosen_gridpoints
###Output
_____no_output_____
###Markdown
In order to recreate the `country_points.csv`, one needs to install `geopandas` and download a `GeoJSON` file (23MB) which contains the country borders. On Windows there might be no `wget` command; use `requests.get()` instead to download the file:
###Code
# !conda install --yes geopandas
# !wget -O ../data/countries.geojson https://raw.githubusercontent.com/datasets/geo-countries/master/data/countries.geojson
###Output
_____no_output_____
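###Markdown
As mentioned above, on Windows the file can also be fetched with `requests.get()`. The following cell is a sketch added for illustration (it assumes the `requests` package is installed, which the original notebook does not import):
###Code
# import requests
#
# url = 'https://raw.githubusercontent.com/datasets/geo-countries/master/data/countries.geojson'
# response = requests.get(url)
# response.raise_for_status()
# with open('../data/countries.geojson', 'wb') as f:
#     f.write(response.content)
###Output
_____no_output_____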
###Markdown
The following lines create the `country_points.csv`:
###Code
# country_points = choose_country_points(temperatures.longitude, temperatures.latitude)
# country_points.to_csv('../data/country_points.csv', index=False)
###Output
_____no_output_____
###Markdown
But since it is already prepared, let's just load it...
###Code
country_points = pd.read_csv('../data/country_points.csv')
country_points
###Output
_____no_output_____
###Markdown
Let's plot some of these points:
###Code
plt.plot(country_points['Austria_longitude'], country_points['Austria_latitude'], 'o')
plt.xlabel('Longitude [deg]')
plt.ylabel('Latitude [deg]');
plt.plot(country_points['Germany_longitude'], country_points['Germany_latitude'], 'o')
plt.xlabel('Longitude [deg]')
plt.ylabel('Latitude [deg]');
###Output
_____no_output_____
###Markdown
Calculate mean temperature for each country
###Code
country = 'Austria'
country_temperature = temperatures.sel(
longitude=xr.DataArray(country_points['Austria_longitude'], dims='points'),
latitude=xr.DataArray(country_points['Austria_latitude'], dims='points'))
country_temperature
def calc_country_temperature(country):
country_temperature = temperatures.sel(
longitude=xr.DataArray(country_points[f'{country}_longitude'], dims='points'),
latitude=xr.DataArray(country_points[f'{country}_latitude'], dims='points')).mean(dim='points')
return country_temperature
temperature_at = calc_country_temperature('Austria')
temperature_at.plot()
###Output
_____no_output_____
###Markdown
Who likes to have it warm?
###Code
plt.plot(temperature_at.interp(time=power_demand.Austria.index),
power_demand.Austria, 'o')
plt.xlabel('Temperature [°C]')
plt.ylabel('Load [MW]');
idcs = (power_demand.Austria.index.weekday == 2) & (power_demand.Austria.index.hour == 9)
idcs
plt.plot(temperature_at.interp(time=power_demand.Austria.index[idcs]),
power_demand.Austria[idcs], 'o')
plt.ylim(6_000, 11_000)
plt.xlabel('Temperature [°C]')
plt.ylabel('Load [MW]')
plt.title("Load vs Temperature (Wednesdays 9:00am)");
from scipy.ndimage import median_filter
power_temperature = pd.DataFrame()
power_temperature['TotalLoadValue'] = power_demand.Austria[idcs]
power_temperature['Temperature'] = temperature_at.interp(time=power_demand.Austria.index[idcs])
power_temperature = power_temperature.sort_values('Temperature')
#plt.plot(power_temperature.Temperature,
# power_temperature.TotalLoadValue, '-')
plt.plot(power_temperature.Temperature,
median_filter(power_temperature.TotalLoadValue,
mode='nearest',
size=30),
'-')
plt.ylim(6_000, 11_000)
plt.xlabel('Temperature [°C]')
plt.ylabel('Load [MW]')
plt.title("Load vs Temperature (Wednesdays 9:00am)");
###Output
_____no_output_____
###Markdown
A `median_filter()` will replace each value by the median of its surroundings of size `size`:
###Code
median_filter(np.array([1., 1., 1., 1., 5., 1., 1.]), size=3)
median_filter(np.array([1., 1., 1., 1., 5., 5., 1.]), size=3)
for country in COUNTRIES:
power_demand_country = power_demand[country]
country_temperature = calc_country_temperature(country)
# select observations from Wednesdays 9:00am
idcs = (power_demand_country.index.weekday == 2) & (power_demand_country.index.hour == 9)
power_temperature = pd.DataFrame()
power_temperature['TotalLoadValue'] = power_demand_country[idcs]
power_temperature['Temperature'] = country_temperature.interp(time=power_demand_country.index[idcs])
power_temperature = power_temperature.sort_values('Temperature')
normalized_load = power_temperature.TotalLoadValue / power_temperature.TotalLoadValue.mean()
normalized_load_filtered = median_filter(normalized_load, mode='nearest', size=30)
lines, = plt.plot(power_temperature.Temperature, normalized_load_filtered, '-', label=country)
#if country == 'United Kingdom':
# plt.plot(power_temperature.Temperature, normalized_load, 'o-',
# linewidth=0.5, markersize=2, alpha=0.4,
# color=lines.get_color(),
# label=f"{country} (unfiltered)")
plt.xlabel('Temperature [°C]')
plt.ylabel('Load relative to mean load')
plt.legend();
###Output
_____no_output_____
###Markdown
xarray
###Code
COUNTRIES = 'Austria', 'Germany', 'Switzerland', 'Italy', 'Spain', 'Sweden', 'United Kingdom'
###Output
_____no_output_____
###Markdown
From previous lecture...
###Code
CONFIG_FILE = '../entsoe-data.config'
if not os.path.exists(CONFIG_FILE):
download_dir = input('Path to ENTSO-E data folder: ')
if not os.path.isdir(download_dir):
raise RuntimeError(f'Invalid download_dir, please run cell again: {download_dir}')
with open(CONFIG_FILE, 'w') as f:
f.write(download_dir)
else:
with open(CONFIG_FILE) as f:
download_dir = f.read()
# Clear the output after this cell if you want to avoid having your path in the notebook (or execute it twice)!
def read_single_csv_entso_e(file):
return pd.read_csv(file, sep='\t', encoding='utf-16', parse_dates=["DateTime"])
def load_complete_entso_e_data(directory):
pattern = Path(directory) / '*.csv'
files = glob.glob(str(pattern))
if not files:
raise ValueError(f"No files found when searching in {pattern}, wrong directory?")
print(f'Concatenating {len(files)} csv files...')
each_csv_file = [read_single_csv_entso_e(file) for file in files]
data = pd.concat(each_csv_file, ignore_index=True)
data = data.sort_values(by=["AreaName", "DateTime"])
data = data.set_index("DateTime")
print("Loading done.")
return data
power_demand = load_complete_entso_e_data(download_dir)
def get_hourly_country_data(data, country):
ret_data = data[data["AreaName"] == country].interpolate()
#ret_data = ret_data.set_index("DateTime")
ret_data = ret_data.resample("1h").mean().interpolate()
return ret_data
###Output
_____no_output_____
###Markdown
Temperature dataERA5 data is provided as NetCDF file. The library `xarray` comes in very handy to load such files.
###Code
import xarray as xr
temperatures_dataset = xr.load_dataset('../data/temperatures_era5.nc')
temperatures_dataset
temperatures = temperatures_dataset.t2m
temperatures
###Output
_____no_output_____
###Markdown
Oh there are NaN values? How many of them?
###Code
total_size = temperatures.sizes['time'] * temperatures.sizes['latitude'] * temperatures.sizes['longitude']
float(np.isnan(temperatures).sum() / total_size)
###Output
_____no_output_____
###Markdown
Uh, 55% of missing values... That's not good! What could that be?
###Code
(~np.isnan(temperatures)).prod(dim='time').plot.imshow(cmap='gray')
###Output
_____no_output_____
###Markdown
Exercise 1
Plot the mean temperature for each location! (There will be a warning because of the NaNs, but that's okay.)
###Code
temperatures.mean(dim='time').plot.imshow()
###Output
/opt/miniconda3/envs/scientific-computing/lib/python3.7/site-packages/xarray/core/nanops.py:142: RuntimeWarning: Mean of empty slice
return np.nanmean(a, axis=axis, dtype=dtype)
###Markdown
Temperature seems not to be in °C...
###Code
temperatures = temperatures - 273.15
temperatures.name = 'Temperature [C°]'
temperatures.mean(dim='time').plot.imshow()
###Output
_____no_output_____
###Markdown
Mean temperature for each country
As a next step, we want to calculate the mean temperature for each country.
Pick random grid points to calculate the mean
We'll pick just some random samples from the grid for each country, to make the computation of the mean faster. The coordinates are already prepared as a CSV file, which has been generated using the following code.
###Code
def choose_country_points(longitude, latitude, grid_points_per_country=20):
"""Pick random points for each country from the grid with axis ``longitude`` and ``latitude``.
    ``grid_points_per_country`` is the number of points to be picked for each country.
Returns a dataframe with two columns per country (longitude & latitude)
and ``grid_points_per_country`` numbers of rows.
Note: GeoJSON always uses WGS84:
https://tools.ietf.org/html/rfc7946
"""
# local import to avoid dependency
import geopandas
from shapely.geometry import Point
longitudes, latitudes = np.meshgrid(longitude, latitude)
longitudes = longitudes.flatten()
latitudes = latitudes.flatten()
grid_points = geopandas.GeoSeries(geopandas.points_from_xy(longitudes.flatten(),
latitudes.flatten()))
# XXX fix me, correct path!
country_borders = geopandas.read_file('../data/countries.geojson')
chosen_gridpoints = pd.DataFrame()
for country in COUNTRIES:
print(f"Picking grid points for {country}...")
is_country = country_borders.ADMIN == country
country_border = country_borders[is_country].geometry.iloc[0]
is_in_country = grid_points.within(country_border)
number_of_points = is_in_country.sum()
# make things reproducible!
np.random.seed(42)
idcs = np.random.randint(number_of_points, size=grid_points_per_country)
chosen_gridpoints[f'{country}_longitude'] = longitudes[is_in_country][idcs]
chosen_gridpoints[f'{country}_latitude'] = latitudes[is_in_country][idcs]
return chosen_gridpoints
###Output
_____no_output_____
###Markdown
In order to recreate the `country_points.csv` one needs to install `geopandas` and download a `GeoJSON` file (23MB) which contains the country borders:
###Code
# !conda install --yes geopandas
# !wget -O ../data/countries.geojson https://raw.githubusercontent.com/datasets/geo-countries/master/data/countries.geojson
###Output
_____no_output_____
###Markdown
The following lines create the `country_points.csv`:
###Code
# country_points = choose_country_points(temperatures.longitude, temperatures.latitude)
# country_points.to_csv('../data/country_points.csv', index=False)
###Output
_____no_output_____
###Markdown
But since it is already prepared, let's just load it...
###Code
country_points = pd.read_csv('../data/country_points.csv')
country_points
###Output
_____no_output_____
###Markdown
Let's plot some of these points:
###Code
plt.plot(country_points['Austria_longitude'], country_points['Austria_latitude'], 'o')
plt.xlabel('Longitude [deg]')
plt.ylabel('Latitude [deg]');
plt.plot(country_points['Germany_longitude'], country_points['Germany_latitude'], 'o')
plt.xlabel('Longitude [deg]')
plt.ylabel('Latitude [deg]');
###Output
_____no_output_____
###Markdown
Calculate mean temperature deviation for grid points
###Code
def calc_country_temperature(country):
country_temperature = temperatures.sel(
longitude=country_points[f'{country}_longitude'],
latitude=country_points[f'{country}_latitude']).mean(dim=['longitude', 'latitude'])
return country_temperature
temperature_at = calc_country_temperature('Austria')
temperature_at.plot()
# mean_temperatures.groupby(mean_temperatures)  # leftover fragment: `mean_temperatures` is not defined in this notebook
###Output
_____no_output_____
###Markdown
Who likes to have it warm?
###Code
def plot_power_vs_temperatur(country):
power_demand_hourly = get_hourly_country_data(power_demand, country)["2015-01-01":"2019-12-31"]
country_temperature = calc_country_temperature(country)
idcs = (power_demand_hourly.index.weekday == 2) & (power_demand_hourly.index.hour == 9)
plt.plot(country_temperature.interp(time=power_demand_hourly.index[idcs]),
power_demand_hourly.TotalLoadValue[idcs] / power_demand_hourly.TotalLoadValue[idcs].mean(),
'o', markersize=3, label=country)
plt.xlabel('Temperature [C°]')
    plt.ylabel('Load relative to average load')
for country in ['Spain', 'Italy', 'Austria', 'Germany', 'Sweden']:
plot_power_vs_temperatur(country)
plt.legend()
###Output
_____no_output_____
###Markdown
Use mean temperature deviation as feature for the load model
###Code
power_demand_hourly = get_hourly_country_data(power_demand, 'Austria')
power_demand_hourly_normal = power_demand_hourly["2015-01-01":"2019-12-31"]
power_demand_hourly_covid = power_demand_hourly["2020-01-01":"2020-05-31"].copy()
X = np.array([power_demand_hourly_normal.index.dayofyear.values,
              power_demand_hourly_normal.index.weekday.values,
              power_demand_hourly_normal.index.hour.values]).T
Y = power_demand_hourly_normal["TotalLoadValue"].values
def extract_features(power_demand_hourly):
X = np.array([power_demand_hourly.index.dayofyear.values,
power_demand_hourly.index.weekday.values,
power_demand_hourly.index.hour.values]).T
return X
def plot_load_prediction_ratio(country):
"""
"""
print(f"Analyzing load data for '{country}'...")
power_demand_hourly = get_hourly_country_data(power_demand, country)
power_demand_hourly_normal = power_demand_hourly["2015-01-01":"2019-12-31"]
power_demand_hourly_covid = power_demand_hourly["2020-01-01":"2020-05-31"].copy()
X = extract_features(power_demand_hourly_normal)
Y = power_demand_hourly_normal["TotalLoadValue"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
forest = ensemble.RandomForestRegressor()
forest.fit(X_train, Y_train)
prediction_train = forest.predict(X_train)
prediction_test = forest.predict(X_test)
print(f"{country}: R2 score (training/test): ", r2_score(Y_train, prediction_train),
"/", r2_score(Y_test, prediction_test))
X_covid = extract_features(power_demand_hourly_covid)
prediction_covid = forest.predict(X_covid)
power_demand_hourly_covid['Prediction'] = prediction_covid
power_demand_hourly_covid_monthly = power_demand_hourly_covid.resample('1m').mean()
ratio = power_demand_hourly_covid_monthly.TotalLoadValue / power_demand_hourly_covid_monthly.Prediction
ratio.plot(label=country)
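
# --- Sketch added for illustration (not part of the original lecture) ---
# The heading above mentions using the mean temperature (deviation) as a feature,
# but extract_features() only uses calendar information. One possible way to add
# the temperature (assuming calc_country_temperature() from the cells above):
def extract_features_with_temperature(power_demand_hourly, country_temperature):
    temperature = country_temperature.interp(time=power_demand_hourly.index).values
    X = np.array([power_demand_hourly.index.dayofyear.values,
                  power_demand_hourly.index.weekday.values,
                  power_demand_hourly.index.hour.values,
                  temperature]).T
    return X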
###Output
_____no_output_____ |
3. Data Visualization : Matplotlib/Tugas Harian 5 Week 3.ipynb | ###Markdown
Download the file vgsales.csv here
###Code
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('vgsales.csv')
df.head()
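
# --- Sketch added for illustration; it assumes the usual vgsales.csv columns
# such as 'Genre' and 'Global_Sales' (adjust if the column names differ) ---
# sales_by_genre = df.groupby('Genre')['Global_Sales'].sum().sort_values()
# sales_by_genre.plot(kind='barh')
# plt.xlabel('Global sales (millions of units)')
# plt.title('Global sales by genre')
# plt.show()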
###Output
_____no_output_____ |
Edu/Module2/Project.ipynb | ###Markdown
Process the MNIST dataset:
- Normalize image data, convert labels to integers
- Take note of image dimensions using image.shape, image.channels.

Add a convolutional layer to your sequential() model prior to your input, hidden and output layers:
- Conv2D layer with 32 units, 3x3 kernel size, relu activation function
- Second Conv2D activation layer with: 64 units and relu activation function
- A MaxPooling2D layer with 2x2 pooling size

Add standard input, hidden, output layers.
Save the output of the model into a variable using .evaluate()
###Code
# Normalize image data, convert labels to integers
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()
# Normalize image data
# current range is 0-255, change to 0-1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Convert labels to integers
check_labels = set(train_labels)
print(check_labels)
# this shows that the labels already are integers
# Take note of image dimensions using image.shape, image.channels.
print(train_images[0].shape)
# I do not know the "image.channels" for a numpy.ndarray, so I suppose this should be image.size?
print(train_images.shape)
# Add a convolutional layer to your sequential() model prior to your input, hidden and output layers:
# conv2D layer with 32 units, 3x3 kernal size, relu activation function
# Second Conv2D activation layer with: 64 units and relu activation function
# A Maxpooling2D layer with 2x2 pooling size
# first prepare images for conv2D
train_batch_size = train_images.shape[0]
test_batch_size = test_images.shape[0]
train_images = train_images.reshape(train_batch_size, 28, 28,1).astype('float32')
test_images = test_images.reshape(test_batch_size, 28, 28,1).astype('float32')
model = keras.Sequential([
keras.layers.Conv2D(32,(3,3), input_shape=(28, 28,1), activation=tf.nn.relu),
keras.layers.Conv2D(64, (4,4), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size = (2,2)),
keras.layers.Flatten(),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=3)
# Save output of model into a variable using .evaluate()
evaluation = model.evaluate(test_images, test_labels, verbose=0)
evaluation
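
# model.evaluate() returns [loss, accuracy] here because the model was compiled
# with metrics=['accuracy']; a small illustrative report (added for clarity):
test_loss, test_accuracy = evaluation
print(f"Test loss: {test_loss:.4f}, test accuracy: {test_accuracy:.4f}")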
###Output
_____no_output_____ |
User Agent Analysis, July 8, 2016.ipynb | ###Markdown
Load CSV of User-Agents
###Code
import csv
import user_agents  # pip package "user-agents", used below for UA parsing

science_user_agents = []
with open("private_data/science_comment_counts_by_ua.csv", "r") as f:
    for row in csv.DictReader(f):
        science_user_agents.append(row)
#science_user_agents[0].keys()
#Count Events ==> Number of comments
#user_id count ==> Unique users
total_commenters = sum([int(x['user_id count']) for x in science_user_agents])
print("Total commenters in last 28 days: {0}".format(total_commenters))
###Output
Total commenters in last 28 days: 39488
###Markdown
Parse User Agents
###Code
def get_ua_type(ua):
keys = ["BaconReader","Relay", "reddit is fun",
"Reddit/Version", "amrc", "laurencedawson",
"RedditAndroid", "Readit for WP", "AlienBlue",
"narwhal"]
for key in keys:
if(key in ua):
return "app"
if user_agents.parse(ua).is_mobile:
return "mobile"
return("desktop")
from collections import Counter, defaultdict
ua_sums = defaultdict(lambda: defaultdict(int))
totals = defaultdict(int)
for ua in science_user_agents:
ua_type = get_ua_type(ua['user_agent'])
for key in ['user_id count', 'Count Events']:
ua_sums[ua_type][key] += int(ua[key])
totals[key] += int(ua[key])
labels = {"user_id count": "Unique Commenters",
"Count Events": "Total Comments"}
for key in totals.keys():
pct_mobile = float(float(ua_sums["mobile"][key]) / float(totals[key])) * 100.
print("Mobile Web: {value:.2g}% of {key}".format(value = pct_mobile, key=labels[key]))
pct_desktop = float(float(ua_sums["desktop"][key]) / float(totals[key])) * 100.
print("Desktop Web: {value:.2g}% of {key}".format(value = pct_desktop, key=labels[key]))
pct_app = float(float(ua_sums["app"][key]) / float(totals[key])) * 100.
print("App: {value:.2g}% of {key}".format(value = pct_app, key=labels[key]))
print("")
print("")
###Output
Mobile Web: 8.2% of Unique Commenters
Desktop Web: 54% of Unique Commenters
App: 38% of Unique Commenters
Mobile Web: 9.4% of Total Comments
Desktop Web: 58% of Total Comments
App: 32% of Total Comments
|
Forest Fire Prediction Project/Forest_Fire_prediction_804861.ipynb | ###Markdown
Name: **Pratik Agrawal**, Matrikel-Nr: **804861**
Fire in the nature park
***Problem Summary***
The administration of the nature park Montesinho in north-east Portugal wants to predict wild fires based on weather data of the Fire-Weather-Index (FWI). The aim is to recognize the affected area and consequently the intensity of the imminent wild fire as early as possible in order to be able to adequately assess the danger caused by the fire. To this aim, data from 517 wild fires have been collected. The features are summarized below.
***Features***
- X (X-coordinate of the fire in the park: 1 to 9)
- Y (Y-coordinate of the fire in the park: 2 to 9)
- month (month: "jan" to "dec")
- day (day: "mon" to "sun")
- FFMC (FFMC index of the FWI system: 18.7 to 96.2)
- DMC (DMC index of the FWI system: 1.1 to 291.3)
- DC (DC index of the FWI system: 7.9 to 860.6)
- ISI (ISI index of the FWI system: 0.0 to 56.1)
- temp (temperature in degrees Celsius: 2.2 to 33.3)
- RH (relative humidity in %: 15 to 100)
- wind (wind velocity in km/h: 0.4 to 9.4)
- rain (rainfall in mm/m2: 0.0 to 6.4)
- **area (forest area that has been burnt in hectare: 0.09 to 1090.84)**

You have been asked to develop a model that predicts **the burnt forest area** as accurately as possible from the given data.
***Exercise***
Load the data into Python and preprocess them appropriately; perform an adequate normalization of the features. For example, the label area is distributed very non-uniformly, such that a transformation such as area = log(1 + area) is appropriate. Identify and implement a suitable learning method in Python. Train and evaluate the model. Propose a trivial baseline model with which you can compare your model's performance. Apply a reasonable evaluation method. Provide a short documentation and motivation of each of your steps.
**This notebook is primarily divided into two sections:**
* In the first section, data preprocessing & the code for the original paper [1] are implemented. The motivation behind this step is to set up a baseline expectation for my own experiments and to see if we can get better results than what is presented in the paper.
  * Understanding the data & problem from the authors' experiments & results
* In the second section, experimentation with different learning methods and their results are presented. What else can be done to improve predictions?

[1] Paulo Cortez and Anibal Morais, A Data Mining Approach to Predict Forest Fires using Meteorological Data
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
import warnings
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
from math import sqrt
from sklearn.metrics import r2_score
from sklearn.linear_model import LassoCV
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from sklearn.feature_selection import RFECV
from mlxtend.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score , GridSearchCV
from sklearn import metrics
from sklearn.metrics import mean_squared_error
from sklearn.metrics import accuracy_score
from sklearn.metrics import r2_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import tree
from sklearn import linear_model
warnings.filterwarnings("ignore")
df = pd.read_csv('fires.csv') # importing the dataset
###Output
_____no_output_____
###Markdown
Data Pre-processing & Analysis
###Code
df.info() # No Missing values
df.shape
df.describe()
###Output
_____no_output_____
###Markdown
Distribution of the independent variables - skewed or not?
###Code
df.hist(figsize=(20,15))
###Output
_____no_output_____
###Markdown
Checking the independence between the X features, which can result in multicollinearity problem
###Code
#Remove some of the highly correlated independent variables. (if multicollinearity problem)
ncols=['FFMC', 'DMC', 'DC', 'ISI', 'temp', 'RH','wind', 'rain', 'area']
cm = np.corrcoef(df[ncols].values.T)
f, ax = plt.subplots(figsize =(8, 6))
sns.heatmap(cm, ax = ax, cmap ="YlGnBu",annot=True,linewidths = 0.1, yticklabels = ncols,xticklabels = ncols)
###Output
_____no_output_____
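###Markdown
As a complementary check for multicollinearity (added for illustration; it assumes `statsmodels` is installed, which the original notebook does not import), the variance inflation factor of each numerical feature can be computed:
###Code
# from statsmodels.stats.outliers_influence import variance_inflation_factor
#
# num_features = df[['FFMC', 'DMC', 'DC', 'ISI', 'temp', 'RH', 'wind', 'rain']]
# vif = pd.DataFrame({
#     'feature': num_features.columns,
#     'VIF': [variance_inflation_factor(num_features.values, i)
#             for i in range(num_features.shape[1])]
# })
# vif
###Output
_____no_output_____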
###Markdown
Looking at the distribution of the dependent variable & applying a log transformation as the label is skewed
###Code
# Fired area in histogram
df["area"].plot(kind='hist', bins=10)
# Applying Log Transformation
df["area_ln"]=[ 0 if np.isinf(x) else x for x in (df["area"]+1).apply(np.log) ]
## After log applied
df["area_ln"].plot(kind='hist', bins=10)
###Output
_____no_output_____
###Markdown
Some observations from the data
- There are no **null** values in the data
- X, Y, month, day columns are **categorical values**
- The burned area is shown in the figure above, denoting a positive skew, with the majority of the fires presenting a small size. Regarding the present dataset, there are 247 samples with a zero value. To reduce skewness and improve symmetry, the logarithm function y = ln(x + 1), which is a common transformation that tends to improve regression results for right-skewed targets, was applied to the area attribute.

Transforming categorical features into numerical features & standardizing the other numerical features
- all attributes were standardized to zero mean and unit standard deviation
- one-hot encoding for the categorical values
- Feature grouping: STFWI - using spatial, temporal and the four FWI components; STM - with the spatial, temporal and four weather variables; FWI - using only the four FWI components; and M - with the four weather conditions.
###Code
labels_transformed = df.pop('area_ln')
labels_original = df.pop('area')
cols = [col for col in df.columns if col not in ['X','Y','month','day']]
dataset_for_std = df[cols]
cols = [col for col in df.columns if col in ['X','Y','month','day']]
dataset_for_dummies = df[cols]
dataset_for_std.head()
dataset_for_dummies.head() # Categorical Features
# Standardizing the dataset
'''Standardization refers to shifting the distribution of each attribute to have a mean of zero
and a standard deviation of one (unit variance).'''
scaled_features = StandardScaler().fit_transform(dataset_for_std.values)
# Creating back the dataframe of the scaled data
dataset = pd.DataFrame(scaled_features, index=dataset_for_std.index, columns=dataset_for_std.columns)
dataset.head()
categorical_features =pd.get_dummies(data=dataset_for_dummies, columns=['X','Y','month','day'], drop_first=True)
dataset = pd.concat([dataset, categorical_features], axis=1, sort=False)
## Droping Extra catergorical value field (Degree of freedom from categorical values : n-1)
#dataset=dataset.drop(columns=['day_fri', 'month_dec'])
dataset.head()
# Partitioning the dataset into test and train (test = (25% of total data) and train = (75% of total data) )
X_train, X_test, y_train, y_test = train_test_split(dataset, labels_transformed, test_size=0.25, random_state = 4)
# Importing all the required Regressors used in the paper
clf_nb = BayesianRidge(compute_score=True) # Naive-Bayes Ridge Regressor
clf_mr = LinearRegression() # Multiple-Regression
clf_dt = DecisionTreeRegressor(max_depth=2,random_state=0) # Decision Tree Regressor
clf_rf = RandomForestRegressor(max_depth=2, random_state=0, n_estimators=10) # Random Forest Regressor
clf_nn = MLPRegressor() # Neural Networks Regressor
# for STFWI : Spatial, temporal, and Forest Waether Index features are taken
cols = [col for col in X_train.columns if col not in ['temp', 'RH','wind','rain']]
train_features_one = X_train[cols].copy()
test_features_one = X_test[cols].copy()
train_features_one.head()
def calculate_errors(predictions,method):
## get originl area from the transformed one.
y_test_orig = np.exp(y_test)-1
predictions_inverse = np.exp(predictions)-1
y_test_orig[y_test_orig < 0] = 0
predictions_inverse[predictions_inverse<0] = 0
# RootMeanSquared Error Calculation
print("\n\nFor "+method)
meanSquaredError_nb_one = mean_squared_error(y_test_orig, predictions_inverse)
#print("MSE:", meanSquaredError_nb_one)
rootMeanSquaredError_nb_one = sqrt(meanSquaredError_nb_one)
print("RMSE:", rootMeanSquaredError_nb_one)
# MeanAbsolute Error Calculation
absolute_error_nb_one = mean_absolute_error(y_test_orig, predictions_inverse)
print("Absolute error is:", absolute_error_nb_one)
meanSquaredError = mean_squared_error(y_test, predictions)
rootMeanSquaredError = sqrt(meanSquaredError)
print("RMSE_transformed:", rootMeanSquaredError)
#print("R2 error is:",r2_score(y_test_orig, predictions_inverse))
# Naive Bayes Regressor
# Training using Naive Bayes Regressor
clf_nb.fit(train_features_one, y_train)
# Testing using Naive Bayes Regressor
predictions_nb_one = clf_nb.predict(test_features_one)
# Linear Regression
# Training using Linear Regression
clf_mr.fit(train_features_one, y_train)
# Testing using Linear Regression
predictions_mr_one = clf_mr.predict(test_features_one)
# Decision Trees Regressor
# Training using Decision Trees Regressor
clf_dt.fit(train_features_one, y_train)
# Testing using Decision Trees Regressor
predictions_dt_one = clf_dt.predict(test_features_one)
# Random Forest Regressor
# Training using Random Forest Regressor
clf_rf.fit(train_features_one, y_train)
# Testing using Random Forest Regressor
predictions_rf_one = clf_rf.predict(test_features_one)
# MLP Regressor
# Training using MLP Regressor
clf_nn.fit(train_features_one, y_train)
# Testing using MLP Regressor
predictions_nn_one = clf_nn.predict(test_features_one)
# SVM Regressor
clf_svm = SVR(C=3.0, epsilon=0.2) # SVM Regressor
# Training using SVM Regressor
clf_svm.fit(train_features_one, y_train)
# Testing using SVM Regressor
predictions_svm_one = clf_svm.predict(test_features_one)
calculate_errors(predictions_nb_one,"Naive Bayes")
calculate_errors(predictions_mr_one,"Linear Regression")
calculate_errors(predictions_dt_one,"Decision Trees")
calculate_errors(predictions_rf_one,"Random Forest")
calculate_errors(predictions_nn_one,"MLP ")
calculate_errors(predictions_svm_one,"SVM ")
# for STM : Spatial, temporal, and Meterological features are taken
cols = [col for col in X_train.columns if col not in ['FFMC', 'DMC', 'DC', 'ISI']]
train_features_two = X_train[cols].copy()
test_features_two = X_test[cols].copy()
train_features_two.head()
# Importing all the required Regressors used in the paper
clf_nb = BayesianRidge(compute_score=True) # Naive-Bayes Ridge Regressor
clf_mr = LinearRegression() # Multiple-Regression
clf_dt = DecisionTreeRegressor(max_depth=2,random_state=0) # Decision Tree Regressor
clf_rf = RandomForestRegressor(max_depth=2, random_state=0, n_estimators=10) # Random Forest Regressor
clf_nn = MLPRegressor() # Neural Networks Regressor
# Naive Bayes Regressor
# Training using Naive Bayes Regressor
clf_nb.fit(train_features_two, y_train)
# Testing using Naive Bayes Regressor
predictions_nb_two = clf_nb.predict(test_features_two)
# Linear Regression
# Training using Linear Regression
clf_mr.fit(train_features_two, y_train)
# Testing using Linear Regression
predictions_mr_two = clf_mr.predict(test_features_two)
# Decision Trees Regressor
# Training using Decision Trees Regressor
clf_dt.fit(train_features_two, y_train)
# Testing using Decision Trees Regressor
predictions_dt_two = clf_dt.predict(test_features_two)
# Random Forest Regressor
# Training using Random Forest Regressor
clf_rf.fit(train_features_two, y_train)
# Testing using Random Forest Regressor
predictions_rf_two = clf_rf.predict(test_features_two)
# MLP Regressor
# Training using MLP Regressor
clf_nn.fit(train_features_two, y_train)
# Testing using MLP Regressor
predictions_nn_two = clf_nn.predict(test_features_two)
# SVM Regressor
# Training using SVM Regressor
clf_svm.fit(train_features_two, y_train)
# Testing using SVM Regressor
predictions_svm_two = clf_svm.predict(test_features_two)
calculate_errors(predictions_nb_two,"Naive Bayes")
calculate_errors(predictions_mr_two,"Linear Regression")
calculate_errors(predictions_dt_two,"Decision Trees")
calculate_errors(predictions_rf_two,"Random Forest")
calculate_errors(predictions_nn_two,"MLP ")
calculate_errors(predictions_svm_two,"SVM ")
# for FWI : Forest Weather Index features are taken
train_features_three = X_train[['FFMC', 'DMC', 'DC', 'ISI']].copy()
test_features_three = X_test[['FFMC', 'DMC', 'DC', 'ISI']].copy()
train_features_three.head()
# Naive Bayes Regressor
# Training using Naive Bayes Regressor
clf_nb.fit(train_features_three, y_train)
# Testing using Naive Bayes Regressor
predictions_nb_three = clf_nb.predict(test_features_three)
# Linear Regression
# Training using Linear Regression
clf_mr.fit(train_features_three, y_train)
# Testing using Linear Regression
predictions_mr_three = clf_mr.predict(test_features_three)
# Decision Trees Regressor
# Training using Decision Trees Regressor
clf_dt.fit(train_features_three, y_train)
# Testing using Decision Trees Regressor
predictions_dt_three = clf_dt.predict(test_features_three)
# Random Forest Regressor
# Training using Random Forest Regressor
clf_rf.fit(train_features_three, y_train)
# Testing using Random Forest Regressor
predictions_rf_three = clf_rf.predict(test_features_three)
# MLP Regressor
# Training using MLP Regressor
clf_nn.fit(train_features_three, y_train)
# Testing using MLP Regressor
predictions_nn_three = clf_nn.predict(test_features_three)
# SVM Regressor
# Training using SVM Regressor
clf_svm.fit(train_features_three, y_train)
# Testing using SVM Regressor
predictions_svm_three = clf_svm.predict(test_features_three)
calculate_errors(predictions_nb_three,"Naive Bayes")
calculate_errors(predictions_mr_three,"Linear Regression")
calculate_errors(predictions_dt_three,"Decision Trees")
calculate_errors(predictions_rf_three,"Random Forest")
calculate_errors(predictions_nn_three,"MLP ")
calculate_errors(predictions_svm_three,"SVM ")
# for M (using only four weather conditions)
train_features_four = X_train[[ 'temp', 'RH', 'wind', 'rain']].copy()
test_features_four = X_test[['temp', 'RH', 'wind', 'rain']].copy()
train_features_four.head()
# Naive Bayes Regressor
# Training using Naive Bayes Regressor
clf_nb.fit(train_features_four, y_train)
# Testing using Naive Bayes Regressor
predictions_nb_four = clf_nb.predict(test_features_four)
# Linear Regression
# Training using Linear Regression
clf_mr.fit(train_features_four, y_train)
# Testing using Linear Regression
predictions_mr_four = clf_mr.predict(test_features_four)
# Decision Trees Regressor
# Training using Decision Trees Regressor
clf_dt.fit(train_features_four, y_train)
# Testing using Decision Trees Regressor
predictions_dt_four = clf_dt.predict(test_features_four)
# Random Forest Regressor
# Training using Random Forest Regressor
clf_rf.fit(train_features_four, y_train)
# Testing using Random Forest Regressor
predictions_rf_four = clf_rf.predict(test_features_four)
# MLP Regressor
# Training using MLP Regressor
clf_nn.fit(train_features_four, y_train)
# Testing using MLP Regressor
predictions_nn_four = clf_nn.predict(test_features_four)
# SVM Regressor
# Training using SVM Regressor
clf_svm.fit(train_features_four, y_train)
# Testing using SVM Regressor
predictions_svm_four = clf_svm.predict(test_features_four)
calculate_errors(predictions_nb_four,"Naive Bayes")
calculate_errors(predictions_mr_four,"Linear Regression")
calculate_errors(predictions_dt_four,"Decision Trees")
calculate_errors(predictions_rf_four,"Random Forest")
calculate_errors(predictions_nn_four,"MLP ")
calculate_errors(predictions_svm_four,"SVM ")
#K_fold_measures(model,Data[[ 'temp', 'RH', 'wind', 'rain']],labels_transformed)
###Output
For Naive Bayes
RMSE: 67.21807118673051
Absolute error is: 13.232648010939203
RMSE_transformed: 1.4099069893632237
For Linear Regression
RMSE: 67.14968132019081
Absolute error is: 13.262579398807931
RMSE_transformed: 1.4072810526892718
For Decision Trees
RMSE: 67.08364180573587
Absolute error is: 13.200964888853713
RMSE_transformed: 1.3935246007723923
For Random Forest
RMSE: 66.94606044979211
Absolute error is: 13.227860012194103
RMSE_transformed: 1.3875850229204518
For MLP
RMSE: 67.08198769400396
Absolute error is: 13.149744813225778
RMSE_transformed: 1.3856663154488722
For SVM
RMSE: 67.29116395680332
Absolute error is: 12.87551890573591
RMSE_transformed: 1.473872924707372
###Markdown
Results do not match exactly because the hyperparameters for all methods were not given in the paper. Also, the measurements presented here are only for the test set, not for the entire data using cross-validation as the paper did.
Model Fitting, Validation & Experiments
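Since the exercise also asks for a trivial baseline model to compare against, the following cell sketches one (added for illustration; it reuses the `X_train`/`X_test` split, the transformed labels and `calculate_errors()` from the cells above): a dummy regressor that always predicts the mean of the transformed target.
###Code
from sklearn.dummy import DummyRegressor

# trivial baseline: always predict the mean log-area seen during training
baseline = DummyRegressor(strategy='mean')
baseline.fit(X_train, y_train)
predictions_baseline = baseline.predict(X_test)
calculate_errors(predictions_baseline, "Mean baseline")
###Output
_____no_output_____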
###Code
# for all features
train_features_five = X_train.copy()
test_features_five = X_test.copy()
train_features_five.head()
# Naive Bayes Regressor
# Training using Naive Bayes Regressor
clf_nb.fit(train_features_five, y_train)
# Testing using Naive Bayes Regressor
predictions_nb_five = clf_nb.predict(test_features_five)
# Linear Regression
# Training using Linear Regression
clf_mr.fit(train_features_five, y_train)
# Testing using Linear Regression
predictions_mr_five = clf_mr.predict(test_features_five)
# Decision Trees Regressor
# Training using Decision Trees Regressor
clf_dt.fit(train_features_five, y_train)
# Testing using Decision Trees Regressor
predictions_dt_five = clf_dt.predict(test_features_five)
# Random Forest Regressor
# Training using Random Forest Regressor
clf_rf.fit(train_features_five, y_train)
# Testing using Random Forest Regressor
predictions_rf_five = clf_rf.predict(test_features_five)
# MLP Regressor
# Training using MLP Regressor
clf_nn.fit(train_features_five, y_train)
# Testing using MLP Regressor
predictions_nn_five = clf_nn.predict(test_features_five)
# SVM Regressor
# Training using SVM Regressor
clf_svm.fit(train_features_five, y_train)
# Testing using SVM Regressor
predictions_svm_five = clf_svm.predict(test_features_five)
calculate_errors(predictions_nb_five,"Naive Bayes")
calculate_errors(predictions_mr_five,"Linear Regression")
calculate_errors(predictions_dt_five,"Decision Trees")
calculate_errors(predictions_rf_five,"Random Forest")
calculate_errors(predictions_nn_five,"MLP ")
calculate_errors(predictions_svm_five,"SVM ")
lasso = LassoCV().fit(train_features_one, y_train)
predictions_lasso = lasso.predict(test_features_one)
calculate_errors(predictions_lasso,"Lasso STFWI")
lasso = LassoCV().fit(train_features_two, y_train)
predictions_lasso = lasso.predict(test_features_two)
calculate_errors(predictions_lasso,"Lasso STM")
lasso = LassoCV().fit(train_features_three, y_train)
predictions_lasso = lasso.predict(test_features_three)
calculate_errors(predictions_lasso,"Lasso FWI")
lasso = LassoCV().fit(train_features_four, y_train)
predictions_lasso = lasso.predict(test_features_four)
calculate_errors(predictions_lasso,"Lasso M")
lasso = LassoCV().fit(train_features_five, y_train)
predictions_lasso = lasso.predict(test_features_five)
calculate_errors(predictions_lasso,"Lasso All")
###Output
For Lasso STFWI
RMSE: 67.22839073346624
Absolute error is: 13.212248067766884
RMSE_transformed: 1.4153399874784804
For Lasso STM
RMSE: 67.20270306204736
Absolute error is: 13.221775853379288
RMSE_transformed: 1.4063764381248856
For Lasso FWI
RMSE: 67.19859879683105
Absolute error is: 13.219737700672423
RMSE_transformed: 1.4159847939271413
For Lasso M
RMSE: 67.20571579845114
Absolute error is: 13.24627079343826
RMSE_transformed: 1.4120970785503273
For Lasso All
RMSE: 67.19578253148653
Absolute error is: 13.205190582934208
RMSE_transformed: 1.4093048558299972
###Markdown
The error measures for the Lasso model are similar for all feature combinations, with no significant improvements.
Select top features with SVM - Experiment
- SFSs eliminate (or add) features based on a user-defined classifier/regression performance metric.
- The motivation behind feature selection algorithms is to automatically select a subset of features that is most relevant to the problem.
- Ref: http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/
###Code
from mlxtend.feature_selection import SequentialFeatureSelector ## Sequential Selection
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
sfs = SequentialFeatureSelector(SVR(C=3.0, epsilon=0.2), k_features=39,
forward=True,
floating=False,
scoring='neg_root_mean_squared_error',
#scoring='neg_mean_squared_error',
cv=10)
sfs.fit(X_train, y_train)
plt.figsize=(20,15)
fig = plot_sfs(sfs.get_metric_dict(), kind='std_err')
plt.title('Sequential Forward Selection (w. StdErr)')
plt.grid()
plt.show()
sfs = SequentialFeatureSelector(SVR(C=3.0, epsilon=0.2), k_features=(15, 17), # range of feature
forward=True,
floating=False,
scoring='neg_root_mean_squared_error',
cv=10)
sfs.fit(X_train, y_train)
print('best combination (ACC: %.3f): %s\n' % (sfs.k_score_, sfs.k_feature_idx_))
fig = plot_sfs(sfs.get_metric_dict(), kind='std_err')
plt.title('Sequential Forward Selection (w. StdErr)')
plt.grid()
plt.show()
sfs.k_feature_names_
top_attributes = [7, 9, 11, 12, 13, 20, 21, 23, 24, 25, 26, 28, 30, 31, 34, 36, 37]
train_features_six = X_train.iloc[:,top_attributes].copy()
test_features_six = X_test.iloc[:,top_attributes].copy()
train_features_six.head()
# SVM Regressor
# Training using SVM Regressor
clf_svm.fit(train_features_six, y_train)
# Testing using SVM Regressor
predictions_svm_six = clf_svm.predict(test_features_six)
calculate_errors(predictions_svm_six,"SVM 1 feature")
###Output
For SVM 1 feature
RMSE: 67.40908649488266
Absolute error is: 12.996974358050915
RMSE_transformed: 1.494251724883043
###Markdown
The results for the top SVM features are slightly better than when we use all features, BUT not better than the meteorological (M) features.
###Code
## For Hyper parameter selection used Grid Search
def parameter_selection(model,param_grid, X_train, y_train, nfolds=10):
grid_search = GridSearchCV(model, param_grid, cv=nfolds,verbose =1,n_jobs=4)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)
return grid_search
###Output
_____no_output_____
###Markdown
Turn the problem into binary classification problem, predict if there is going to be fire or not. if the classifier predicts there is going to be fire then the inference can be done further for burn area, otherwise burn area is predicted is 0.
###Code
# add classes
## Creating a categorical output feature
X_train.head()
dataset = pd.concat([dataset, labels_original], axis=1, sort=False)
dataset.head()
###Output
_____no_output_____
###Markdown
Creating a categorical output feature
###Code
## Creating a categorical output feature
Data = dataset
Data['burned']=1
Data['burned'][Data["area"]==0]=0
Data_with_burned = Data.copy()
Data['burned'].value_counts()
Data.head()
Data.pop('area')
Data_with_burned.pop('area')
labels_classification = Data.pop('burned')
Data.head()
X_train, X_test, y_train, y_test = train_test_split(Data, labels_classification, test_size=0.2, random_state=0)
## Applying logistic regression with different parameters
## C - Margin : smaller values specify stronger regularization
LGR_grid_result=parameter_selection(LogisticRegression(),{'penalty':['l1', 'l2'],'C':[0.00001, 0.0001, 0.001,0.01,0.5,1,10,100]}, X_train, y_train,10)
LGR_grid_result.best_estimator_
## Applying SVM with different parameters
from sklearn.svm import SVC
SVC_param_grid = {'C': [0.001, 0.01, 0.1, 1, 10,50,100,150], 'gamma' : [0.0001,0.001, 0.01, 0.1, 1,10,100], 'kernel':['poly','rbf']}
SVC_grid_result=parameter_selection(SVC(),SVC_param_grid, X_train, y_train,5)
SVC_grid_result.best_estimator_
names = ["Logistic Regression","Linear SVM", "gridsearch SVM" ,"RBF SVM",
"Decision Tree", "Random Forest"]
classifiers = [
LGR_grid_result.best_estimator_,
SVC(kernel="linear", C=0.025),
SVC_grid_result.best_estimator_,
SVC(gamma=2, C=1),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1)
]
# iterate over classifiers
for name, clf in zip(names, classifiers):
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
## Validation
y_pred = clf.predict(X_test)
y_train_pred=clf.predict(X_train)
## Confusion Matrix
print(name)
print('---------------')
fig, ax = plt.subplots(figsize=(3,3))
sns.heatmap(confusion_matrix(y_test, y_pred), annot=True, fmt='d',xticklabels=['Yes','No'], yticklabels=['Yes','No'])
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
## K Fold Validation with 10 Folds
scores = cross_val_score(clf, Data,labels_classification, cv=10, scoring="accuracy")
scores=np.sqrt(scores)
print('---------------')
print("Means Cross Validation Accuracy Score :"+str(np.mean(scores)))
    print('Precision : Of the samples predicted as burned, '+str(round(np.mean(np.sqrt(cross_val_score(clf, Data, labels_classification, cv=10, scoring="precision"))),2)*100)+'% are predicted correctly')
    print('Recall : Of all actually burned samples, '+str(round(np.mean(np.sqrt(cross_val_score(clf, Data, labels_classification, cv=10, scoring="recall"))),2)*100)+'% are predicted correctly')
print('AUC : '+str(round(np.mean(np.sqrt(cross_val_score(clf, Data, labels_classification, cv=10, scoring="roc_auc"))),2)*100)+'%')
print('---------------')
###Output
Logistic Regression
---------------
###Markdown
Decision Tree has the best AUC (71%) of all the classifiers above, but it does not have better precision and recall scores when compared to the RBF SVM. Precision- (98%) and recall-wise (73%), the RBF SVM is doing well compared to all other classifiers. The RBF SVM also has an AUC of 70%, which is the second best.
###Code
def K_fold_measures(model,Data,labels):
## K Fold Validation
scores = cross_val_score(model, Data, labels, cv=10, scoring="neg_mean_squared_error")
scores=np.sqrt(abs(scores))
print(" ")
#print("Cross Validation RMSE Scores "+str(scores))
print("Cross Validation RMSE Mean Score "+str(np.mean(scores, dtype=np.float64)))
## RMSE of Orginal Area
#print("Cross Validation RMSE Mean Score Orginial Burned Area Value "+str(np.exp(np.mean(scores, dtype=np.float64))-1))
O_Data_pred=model.predict(Data)
print("R2 Score "+str(r2_score(labels,O_Data_pred)))
area_original_normalized = np.exp(labels)-1
area_predicted_normalized =np.exp( O_Data_pred)-1
# RootMeanSquared Error Calculation
# print("\n\nFor "+model.str())
meanSquaredError_nb_one = mean_squared_error(area_original_normalized, area_predicted_normalized)
#print("MSE:", meanSquaredError_nb_one)
rootMeanSquaredError_nb_one = sqrt(meanSquaredError_nb_one)
print("RMSE:", rootMeanSquaredError_nb_one)
# MeanAbsolute Error Calculation
absolute_error_nb_one = mean_absolute_error(area_original_normalized, area_predicted_normalized)
print("Absolute error is:", absolute_error_nb_one)
#print("R2 error is(true Area values):",r2_score(area_original_normalized, area_predicted_normalized))
Data.head()
# Partitioning the dataset into test and train (test = (25% of total data) and train = (75% of total data) )
X_train, X_test, y_train, y_test = train_test_split(Data, labels_transformed, test_size=0.25, random_state = 4)
# for M (using only four weather conditions)
train_features_four = X_train[[ 'temp', 'RH', 'wind', 'rain']].copy()
test_features_four = X_test[['temp', 'RH', 'wind', 'rain']].copy()
train_features_four.head()
##gamma parameter defines how far the influence of a single training example reaches,
#with low values meaning ‘far’ and high values meaning ‘close’
#large C - avoid misclassification at the cost of a smaller margin; small C - the converse
SVR_param_grid = {'C': [0.001, 0.1, 1,3, 10,100], 'gamma' : [0.0001,0.001, 0.01, 0.1, 1,10],
'kernel':['rbf','poly','linear'],'epsilon': [ 0.001, 0.01, 0.1,1, 10]}
#SVR_grid_result=param_selection(SVR(),SVR_param_grid,Data[[ 'temp', 'RH', 'wind', 'rain']], labels_transformed,10)
#SVR_grid_result.best_estimator_
# All model comparison
models = [
LinearRegression(fit_intercept=True, n_jobs=None,normalize=False),
linear_model.Lasso(alpha=.01, normalize=False),
linear_model.Ridge(alpha=1e-05, normalize=False),
SVR(C=3, cache_size=200, coef0=0.0, degree=3, epsilon=0.2, gamma=2, kernel='rbf', tol=0.001)
]
for model in models:
model_name = model.__class__.__name__
model.fit(train_features_four,y_train)
print(model_name)
print("------------")
K_fold_measures(model,Data[[ 'temp', 'RH', 'wind', 'rain']],labels_transformed)
print("------------")
###Output
LinearRegression
------------
Cross Validation RMSE Mean Score 1.4618932364863693
R2 Score 0.009249061084740329
RMSE: 64.45331689461199
Absolute error is: 12.981409624272152
------------
Lasso
------------
Cross Validation RMSE Mean Score 1.4048544037437203
R2 Score 0.009019618025797316
RMSE: 64.45894912156558
Absolute error is: 12.980707755641237
------------
Ridge
------------
Cross Validation RMSE Mean Score 1.4618931610579644
R2 Score 0.009249061106068712
RMSE: 64.4533168959954
Absolute error is: 12.981409623755866
------------
SVR
------------
Cross Validation RMSE Mean Score 1.5188610381865997
R2 Score 0.21465573293160112
RMSE: 62.88571683107614
Absolute error is: 11.786898493415132
------------
###Markdown
With the newly calculated 'Burned' feature
###Code
Data_with_burned.columns
X_train, X_test, y_train, y_test = train_test_split(Data_with_burned, labels_transformed, test_size=0.25, random_state = 4)
# for M (using only four weather conditions)
train_features_four = X_train[[ 'temp', 'RH', 'wind', 'rain','burned']].copy()
test_features_four = X_test[['temp', 'RH', 'wind', 'rain','burned']].copy()
train_features_four.head()
# All model comparison
#R-squared is the fraction by which the variance of the errors is less than the variance of the dependent variable
models = [
LinearRegression(fit_intercept=True, n_jobs=None,normalize=False),
linear_model.Lasso(alpha=.01, normalize=False),
linear_model.Ridge(alpha=1e-05, normalize=False),
SVR(C=3, cache_size=200, coef0=0.0, degree=3, epsilon=0.2, gamma=2, kernel='rbf', tol=0.001)
]
for model in models:
model_name = model.__class__.__name__
model.fit(train_features_four,y_train)
print(model_name)
print("------------")
K_fold_measures(model,Data_with_burned[[ 'temp', 'RH', 'wind', 'rain','burned']],labels_transformed)
print("------------")
###Output
LinearRegression
------------
Cross Validation RMSE Mean Score 0.8129440353880539
R2 Score 0.5798291311064595
RMSE: 63.61345983040902
Absolute error is: 11.554764291195582
------------
Lasso
------------
Cross Validation RMSE Mean Score 0.8086321962332323
R2 Score 0.578941766616914
RMSE: 63.637648965077766
Absolute error is: 11.557029865558022
------------
Ridge
------------
Cross Validation RMSE Mean Score 0.81294402977069
R2 Score 0.5798291309289925
RMSE: 63.61345994614497
Absolute error is: 11.554764251878618
------------
SVR
------------
Cross Validation RMSE Mean Score 1.015888907568905
R2 Score 0.6470895423519027
RMSE: 61.39442317111525
Absolute error is: 10.276474439920184
------------
|
examples/[Library Basics]/algorithms how to/td3.ipynb | ###Markdown
td3
###Code
# == recnn ==
import sys
sys.path.append("../../../")
import recnn
import torch
from torch.utils.tensorboard import SummaryWriter
import torch.nn as nn
from tqdm.auto import tqdm
tqdm.pandas()
from jupyterthemes import jtplot
jtplot.style(theme='grade3')
frame_size = 10
batch_size = 25
# embeddgings: https://drive.google.com/open?id=1EQ_zXBR3DKpmJR3jBgLvt-xoOvArGMsL
env = recnn.data.env.FrameEnv('../../../data/embeddings/ml20_pca128.pkl',
'../../../data/ml-20m/ratings.csv', frame_size, batch_size)
# test function
def run_tests():
batch = next(iter(env.test_dataloader))
loss = td3.update(batch, learn=False)
return loss
value1_net = recnn.nn.Critic(1290, 128, 256, 54e-2)
value2_net = recnn.nn.Critic(1290, 128, 256, 54e-2)
policy_net = recnn.nn.Actor(1290, 128, 256, 6e-1)
cuda = torch.device('cuda')
td3 = recnn.nn.TD3(policy_net, value1_net, value2_net)
td3 = td3.to(cuda)
from time import gmtime, strftime
td3.writer = SummaryWriter(log_dir='../../../runs/td3_{}/'.format(strftime("%m-%d_%H:%M", gmtime())))
plotter = recnn.utils.Plotter(td3.loss_layout, [['value1', 'value2'],['policy']],)
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
plot_every = 50
n_epochs = 2
td3._step = 0
def learn():
for epoch in range(n_epochs):
for batch in tqdm(env.train_dataloader):
loss = td3.update(batch, learn=True)
plotter.log_losses(loss)
td3.step()
if td3._step % plot_every == 0:
clear_output(True)
print('step', td3._step)
test_loss = run_tests()
plotter.log_losses(test_loss, test=True)
plotter.plot_loss()
if td3._step > 4000: # adjust when it needs to stop
return
learn()
torch.save(td3.nets['policy_net'].state_dict(), '../../../models/td3_policy.model')
torch.save(td3.nets['value_net1'].state_dict(), '../../../models/td3_value1.model')
torch.save(td3.nets['value_net2'].state_dict(), '../../../models/td3_value2.model')
gen_actions = td3.debug['next_action']
true_actions = env.embeddings.numpy()
ad = recnn.nn.AnomalyDetector().to(cuda)
ad.load_state_dict(torch.load('../../../models/anomaly.pt'))
ad.eval()
plotter.plot_kde_reconstruction_error(ad, gen_actions, true_actions, cuda)
###Output
_____no_output_____
###Markdown
td3
###Code
# == recnn ==
import sys
sys.path.append("../../../")
import recnn
import torch
from torch.utils.tensorboard import SummaryWriter
import torch.nn as nn
from tqdm.auto import tqdm
tqdm.pandas()
from jupyterthemes import jtplot
jtplot.style(theme='grade3')
frame_size = 10
batch_size = 25
# embeddgings: https://drive.google.com/open?id=1EQ_zXBR3DKpmJR3jBgLvt-xoOvArGMsL
dirs = recnn.data.env.DataPath(
base="../../../data/",
embeddings="embeddings/ml20_pca128.pkl",
ratings="ml-20m/ratings.csv",
cache="cache/frame_env.pkl",
use_cache=True
)
env = recnn.data.env.FrameEnv(dirs, frame_size, batch_size)
# test function
def run_tests():
batch = next(iter(env.test_dataloader))
loss = td3.update(batch, learn=False)
return loss
value1_net = recnn.nn.Critic(1290, 128, 256, 54e-2)
value2_net = recnn.nn.Critic(1290, 128, 256, 54e-2)
policy_net = recnn.nn.Actor(1290, 128, 256, 6e-1)
cuda = torch.device('cuda')
td3 = recnn.nn.TD3(policy_net, value1_net, value2_net)
td3 = td3.to(cuda)
from time import gmtime, strftime
td3.writer = SummaryWriter(log_dir='../../../runs/td3_{}/'.format(strftime("%m-%d_%H:%M", gmtime())))
plotter = recnn.utils.Plotter(td3.loss_layout, [['value1', 'value2'],['policy']],)
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
plot_every = 50
n_epochs = 2
td3._step = 0
def learn():
for epoch in range(n_epochs):
for batch in tqdm(env.train_dataloader):
loss = td3.update(batch, learn=True)
plotter.log_losses(loss)
td3.step()
if td3._step % plot_every == 0:
clear_output(True)
print('step', td3._step)
test_loss = run_tests()
plotter.log_losses(test_loss, test=True)
plotter.plot_loss()
if td3._step > 4000: # adjust when it needs to stop
return
learn()
torch.save(td3.nets['policy_net'].state_dict(), '../../../models/td3_policy.model')
torch.save(td3.nets['value_net1'].state_dict(), '../../../models/td3_value1.model')
torch.save(td3.nets['value_net2'].state_dict(), '../../../models/td3_value2.model')
gen_actions = td3.debug['next_action']
true_actions = env.embeddings.numpy()
ad = recnn.nn.AnomalyDetector().to(cuda)
ad.load_state_dict(torch.load('../../../models/anomaly.pt'))
ad.eval()
plotter.plot_kde_reconstruction_error(ad, gen_actions, true_actions, cuda)
###Output
_____no_output_____ |
Streamlit_Colab/10_Streamlit__Colab_Student_Feedback_.ipynb | ###Markdown
Tutorial 10. Student Feedback, authored by [@Vivika_Martini](https://discuss.streamlit.io/t/student-feedback-form-crud-app/8745). 1) Run all and click the "**Link to web app**" at the bottom. 2) Modify the code in `app.py`. Setup (pip install what you need ...)
###Code
#@title -----------> Installation of Streamlit and pyngrok of course!!
!pip -q install streamlit
!pip -q install pyngrok
###Output
[K |████████████████████████████████| 7.5MB 5.6MB/s
[K |████████████████████████████████| 112kB 57.9MB/s
[K |████████████████████████████████| 163kB 51.2MB/s
[K |████████████████████████████████| 4.5MB 49.0MB/s
[K |████████████████████████████████| 81kB 8.4MB/s
[K |████████████████████████████████| 71kB 6.9MB/s
[K |████████████████████████████████| 122kB 48.0MB/s
[?25h Building wheel for blinker (setup.py) ... [?25l[?25hdone
[31mERROR: google-colab 1.0.0 has requirement ipykernel~=4.10, but you'll have ipykernel 5.4.3 which is incompatible.[0m
Building wheel for pyngrok (setup.py) ... [?25l[?25hdone
###Markdown
The following is the `app.py` base code. It can be modified in the cell or in the folder to the left.
###Code
%%writefile app.py
import streamlit as st
import numpy as np
import pandas as pd
import sqlite3
conn = sqlite3.connect('student_feedback.db')
c = conn.cursor()
def create_table():
c.execute('CREATE TABLE IF NOT EXISTS feedback(date_submitted DATE, Q1 TEXT, Q2 INTEGER, Q3 INTEGER, Q4 TEXT, Q5 TEXT, Q6 TEXT, Q7 TEXT, Q8 TEXT)')
def add_feedback(date_submitted, Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8):
c.execute('INSERT INTO feedback (date_submitted,Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8) VALUES (?,?,?,?,?,?,?,?,?)',(date_submitted,Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8))
conn.commit()
def main():
st.title("Student Feedback")
d = st.date_input("Today's date",None, None, None, None)
question_1 = st.selectbox('Who was your teacher?',('','Mr Thomson', 'Mr Tang', 'Ms Taylor','Ms Rivas','Mr Hindle','Mr Henderson'))
st.write('You selected:', question_1)
question_2 = st.slider('What year are you in?', 7,13)
st.write('You selected:', question_2)
    question_3 = st.slider('Overall, how happy are you with the lesson? (5 being very happy and 1 being very disappointed)', 1,5,1)
st.write('You selected:', question_3)
question_4 = st.selectbox('Was the lesson fun and interactive?',('','Yes', 'No'))
st.write('You selected:', question_4)
question_5 = st.selectbox('Was the lesson interesting and engaging?',('','Yes', 'No'))
st.write('You selected:', question_5)
question_6 = st.selectbox('Were you content with the pace of the lesson?',('','Yes', 'No'))
st.write('You selected:', question_6)
question_7 = st.selectbox('Did your teacher explore the real-world applications of what you learnt?',('','Yes', 'No'))
st.write('You selected:', question_7)
question_8 = st.text_input('What could have been better?', max_chars=50)
if st.button("Submit feedback"):
create_table()
add_feedback(d, question_1, question_2, question_3, question_4, question_5, question_6, question_7, question_8)
st.success("Feedback submitted")
if __name__ == '__main__':
main()
#@title This last cell would keep the app running. If stopped, the app would be disconnected.
from pyngrok import ngrok
public_url = ngrok.connect(port='8080')
print('Link to web app:')
print (public_url)
!streamlit run --server.port 80 app.py >/dev/null
###Output
Link to web app:
NgrokTunnel: "http://ba5c72c75c07.ngrok.io" -> "http://localhost:80"
|
_source_code/2020-09-13-finding-outliers-in-your-data.ipynb | ###Markdown
Hmm, I guess this looks semi-normal with some skewness to the right. Just to be double sure, let's see some normal data...
###Code
# A box-and-whisker plot should show any outliers clearly
df <- data.frame(Data=as.vector(gold))
ggplot(df, aes(y=Data)) + geom_boxplot()
###Output
Warning message:
“Removed 34 rows containing non-finite values (stat_boxplot).”
###Markdown
That upper whisker is not indicating a clear outlier, which would be a dot past the end of the whisker, but my gut is still telling me that the spike to nearly 600 in the above timeseries plot is an outlier. One great method of finding outliers is using a grubbs test! The only problem is that the grubbs test expects the data to be normally distributed. Let's check out if our data is close to a normal distribution
###Code
# What the heck does normal data look like
random_normal_data <- rnorm(100)
favstats(random_normal_data)
plot.ecdf(random_normal_data, main = 'ecdf(x) of Normally Distributed Data')
qqnorm(random_normal_data, main = 'Quantile-Quantile (QQ) plot of Normally Distributed Data')
qqline(random_normal_data, col = "red", lwd = 5)
legend("bottomright", c("Data Points", "Theoretical Normal"), fill=c("black", "red"))
favstats(gold)
# Does our data look normally distributed?
plot.ecdf(gold, main = 'ecdf(x) of Gold Prices Data')
qqnorm(gold, main = 'Quantile-Quantile (QQ) plot of Gold Prices Data')
qqline(gold, col = "red", lwd = 5)
legend("bottomright", c("Data Points", "Theoretical Normal"), fill=c("black", "red"))
###Output
_____no_output_____
###Markdown
It's important to note that a grubbs test expects normality, and as the data isn't strictly normal there can be some concerns about the validity of the results.
###Code
grubbs.test(gold, type = 10)
###Output
_____no_output_____
###Markdown
Grubbs Results: So it looks like we do have an outlier, with a reasonable p-value of 0.1965. The important thing here is to ask ourselves if that outlier is an error in the data, or if it is valuable data that needs to be included in our models to make them more realistic to the real world. For the purposes of this example, we're going to simply assume that there was a one-day run on gold that is not likely to occur again and is not representative of our dataset as a whole.
###Code
# Let's go ahead and remove that max value we think is the outlier
new_gold <- gold[-which.max(gold)]
# Let's compare the old summary to the new summary
favstats(gold)
favstats(new_gold)
# Lastly, let's take a look at the timeseries plot again
plot.ts(new_gold, main = "Gold Prices w/o Outlier Data")
grubbs.test(new_gold, type = 10)
###Output
_____no_output_____ |
assignment1/kNN.ipynb | ###Markdown
k-Nearest Neighbor exercise The k-nearest neighbors (KNN) algorithm is a simple supervised machine learning algorithm. kNN assumes that similar things exist in close proximity. In other words, similar things are "near" to each other. In training stage, the kNN classifier takes the training data and simply remembers it. Then, in testing stage, the classifier looks through the training data and finds the *k* training examples that are **nearest** to the new example based on certain metrics. It then assigns the most common class label (among those *k* training examples) to the test example.Mathematically, for a given example $x$, the output of kNN is the class $y$ with the largest probability:$$P(y=j \mid X=x)=\frac{1}{K} \sum_{i \in \mathcal{A}} I\left(y^{(i)}=j\right)$$where $\mathcal{A}$ is the *k* nearest neighbors of $x$ The target of this assignment is to develop a kNN classifier for [MNIST](http://yann.lecun.com/exdb/mnist/) handwritten digit classification. Table of Contents- [1-Packages](1)- [2-Load the Dataset](2)- [3-kNN Classifier](3)- [4-Test the classifier](4)- [5-Test with different k value](5) 1 - PackagesFirst import all the packages needed during this assignment
###Code
import subprocess
import struct
import numpy as np
import os
import matplotlib.pyplot as plt
from collections import Counter
from tqdm import tqdm
%matplotlib inline
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
2 - Load the Dataset
###Code
remote_url = 'http://yann.lecun.com/exdb/mnist/'
files = ('train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',
't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz')
save_path = 'mnist'
os.makedirs(save_path, exist_ok=True)
# Download MNIST dataset
for file in files:
data_path = os.path.join(save_path, file)
if not os.path.exists(data_path):
url = remote_url + file
print(f'Downloading {file} from {url}')
subprocess.call(['wget', '--quiet', '-O', data_path, url])
print(f'Finish downloading {file}')
# Extract zip files
subprocess.call(f'find {save_path}/ -name "*.gz" | xargs gunzip -f', shell=True);
###Output
Downloading train-images-idx3-ubyte.gz from http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Finish downloading train-images-idx3-ubyte.gz
Downloading train-labels-idx1-ubyte.gz from http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Finish downloading train-labels-idx1-ubyte.gz
Downloading t10k-images-idx3-ubyte.gz from http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Finish downloading t10k-images-idx3-ubyte.gz
Downloading t10k-labels-idx1-ubyte.gz from http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Finish downloading t10k-labels-idx1-ubyte.gz
###Markdown
For convenience, images are reshaped to column vector. Data is represented in `np.array` format.
###Code
mnist_prefixs = ['train_images', 'train_labels', 't10k_images', 't10k_labels']
result = dict.fromkeys(mnist_prefixs)
for file in os.listdir(save_path):
with open(os.path.join(save_path, file), 'rb') as f:
prefix = '_'.join(file.split('-')[:2])
if 'labels' in prefix:
magic_num, size = struct.unpack('>II', f.read(8))
result[prefix] = np.fromfile(f, dtype=np.uint8)
elif 'images' in prefix:
magic_num, size, rows, cols = struct.unpack('>IIII', f.read(16))
# reshape to column vector
result[prefix] = np.fromfile(f, dtype=np.uint8).reshape(size, -1) / 255
else:
raise Exception(f'Unexpected filename: {file}')
train_img, train_label, test_img, test_label = (result[key] for key in mnist_prefixs)
# As a sanity check, print out the size of the training and test data
print('Training data shape: ', train_img.shape)
print('Training labels shape: ', train_label.shape)
print('Test data shape: ', test_img.shape)
print('Test labels shape: ', test_label.shape)
# Visualize some examples from the dataset
classes = list(range(0, 10))
num_classes = len(classes)
sample_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(train_label == cls)
idxs = np.random.choice(idxs, sample_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(sample_per_class, num_classes, plt_idx)
img_size = int(np.sqrt(train_img[idx].shape[-1]))
plt.imshow(train_img[idx].reshape(img_size, img_size))
plt.axis('off')
if i == 0:
plt.title(cls)
###Output
_____no_output_____
###Markdown
3 - kNN Classifier. Recall that kNN works by finding the **distance** (or **similarity**) between a query and all the examples in the data. Thus, a metric of distance (or similarity) between examples is required. In this assignment, we use the `L2 distance` as the distance metric among examples. Other metrics like `L1 distance` or `cosine similarity` are also available and would require only a slight modification to the code. L2 distance:\begin{align*}d(\mathbf{p}, \mathbf{q})=d(\mathbf{q}, \mathbf{p}) &=\sqrt{\sum_{i=1}^{n}\left(q_{i}-p_{i}\right)^{2}}\end{align*}
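Before looking at the implementation, here is a tiny, self-contained sketch of the decision rule on a hypothetical 2-D toy set (the points, labels and k=3 below are invented purely for illustration):
###Code
import numpy as np
from collections import Counter

# five labelled points and one query point, k = 3
X_toy = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6]])
y_toy = np.array([0, 0, 0, 1, 1])
query = np.array([0.5, 0.5])

dists = np.sqrt(((X_toy - query) ** 2).sum(axis=1))             # L2 distance to every point
nearest_labels = y_toy[dists.argsort()[:3]]                     # labels of the 3 closest points
predicted_label = Counter(nearest_labels).most_common(1)[0][0]  # majority vote -> 0
###Markdown
The classifier implemented in the next cell follows the same recipe, only with 784-dimensional image vectors and the full training set loaded above.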
###Code
def classify_10(data, label, img, k):
"""MNIST digit classification using kNN algorithm"""
    # Squared L2 distance between the query image and every training image
    # (the square root is skipped because it does not change the ranking)
    dists = ((data - img) ** 2).sum(axis=1)
    # Majority vote among the k nearest training examples
    k_nearest = Counter(label[dists.argsort()[:k]])
    return k_nearest.most_common(1)[0][0]
def kNN(train_img, train_label, test_img, test_label, k):
error_count = 0
acc_rate = 1.0
prediction = []
pbar = tqdm(enumerate(test_img), total=test_img.shape[0])
for i, img in pbar:
pred = classify_10(train_img, train_label, img, k)
prediction.append(pred)
if pred != test_label[i]:
error_count += 1
acc_rate = 1 - 1.0 * error_count / (i + 1)
pbar.set_postfix_str(f'accuracy: {acc_rate}', refresh=False)
pbar.update(1)
return prediction
###Output
_____no_output_____
###Markdown
4 - Test the classifier
###Code
pred = kNN(train_img, train_label, test_img, test_label, k=3) # Test kNN with k=3
acc = np.mean(pred == test_label)
print('Accuracy: %.6f' % acc)
###Output
100%|██████████| 10000/10000 [35:55<00:00, 4.64it/s, accuracy: 0.9717]
###Markdown
5 - Test with different k value
###Code
# Try to determine the optimal value of k
k_choices = (3, 5, 7, 9)
accuracy = []
for k in k_choices:
pred = kNN(train_img, train_label, test_img, test_label, k=k)
accuracy.append(np.mean(pred == test_label))
print('k = %d; Accuracy: %.6f' % (k, accuracy[-1]))
optimal_k = k_choices[np.array(accuracy).argmax()]
print(f'optimal value of k in {k_choices} is {optimal_k}')
# Plot the accuracy rate with different k value
plt.figure(figsize=(12, 6))
plt.plot(k_choices, accuracy, color='green', marker='o', markersize=9)
plt.title('Accuracy rate on MNIST')
plt.xlabel('K Value')
plt.ylabel('Accuracy rate')
plt.show()
###Output
_____no_output_____ |
notebooks/1.2.1 EDA - Statistical Analysis (numerical).ipynb | ###Markdown
Basic Correlation analysis
###Code
plt.figure(figsize=(12,10))
sns.heatmap(train.corr(method='pearson'), cmap=sns.cm.rocket_r)
###Output
_____no_output_____
###Markdown
Observations (based only on the heatmap of correlations presented above): 1. GarageYrBlt is strongly correlated with YearBuilt -> in most cases the house was originally built together with a garage. - GarageYrBlt with GarageCars and GarageArea -> newer garages tend to be somewhat bigger - GarageArea with OverallQual -> the better the quality, the bigger and newer the garage- OverallQual correlations -> bigger, newer houses- TotalBsmtSF correlation with 1stFlrSF -> the 1st floor is almost the same size as the basement- Bigger BsmtFinSF[1 or 2] gives lower BsmtUnfSF- MoSold has no significant correlation with any variable
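These visual observations can be cross-checked numerically; a small sketch that ranks the absolute Pearson correlations with SalePrice (keeping the top 10 is an arbitrary choice):
###Code
# strongest linear relations with the target, to back up the heatmap reading
corr_with_price = train.corr(method='pearson')['SalePrice'].abs().sort_values(ascending=False)
top_corr_with_price = corr_with_price.drop('SalePrice').head(10)
###Markdown
The numeric ranking should line up with the visual reading summarised above.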
###Code
# create variable for convenience
price = train['SalePrice']
# auxiliary methods in plot_utils files
# higher order function for convenience
show_log_transform = partial(show_transform_plot, trans_fun=np.log, fit_dist=stats.norm, metrics=[stats.skew, stats.kurtosis])
###Output
_____no_output_____
###Markdown
Price description
###Code
price.describe()
###Output
_____no_output_____
###Markdown
Price distribution
###Code
show_log_transform(price)
zscored_price = stats.zscore(np.log(price))
quantil_bound =3.5
print(f'Number of outliers {np.logical_or(zscored_price>quantil_bound, zscored_price<-quantil_bound).sum()}')
print(f'outliers indices : {price[np.logical_or(zscored_price>quantil_bound, zscored_price<-quantil_bound)].index}')
###Output
outliers indices : Int64Index([30, 495, 533, 691, 916, 968, 1182], dtype='int64')
###Markdown
1. SalePrice is right-skewed and its distribution has high kurtosis. To bring SalePrice closer to a Gaussian we can apply a transformation - the log transformation reduces both skewness and kurtosis - the Box-Cox transformation also works well, however its results are similar to the log transform and there is no need to store the additional Box-Cox parameters- We can assume that some outliers occur in the dataset (7 outliers when taking a 3.5 z-score as the outlier boundary in the normalized data) - deleting the outlier values will be considered carefully, because we have to keep in mind that similar outliers may appear in the test set Other continuous variables analysis
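A minimal sketch of the outlier-removal option mentioned above, reusing the 3.5 z-score boundary from the previous cell (the name of the filtered frame is assumed, and the original `train` is left untouched):
###Code
# drop the rows whose log-price z-score exceeds the chosen boundary
outlier_idx = price[np.abs(stats.zscore(np.log(price))) > 3.5].index
train_wo_outliers = train.drop(index=outlier_idx)
###Markdown
Both variants (with and without the flagged rows) can then be carried through the rest of the analysis and compared during model selection.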
###Code
print('Skewness and kurtosis of numerical variables')
for feature in num_feats:
print(feature, '--------------------------------------------------')
print('skewness', stats.skew(train[feature]))
print('kurtosis', stats.kurtosis(train[feature]),'\n\n')
show_log_transform(train['1stFlrSF'])
show_log_transform(train['GrLivArea'])
is_wood_deck = train['WoodDeckSF']>0
sns.regplot(train[is_wood_deck]['WoodDeckSF'], price[is_wood_deck])
plt.title('WoodDeckSF with SalePrice')
plt.show()
is_remode_add = train['YearRemodAdd']!=train['YearBuilt']
sns.regplot(train[is_remode_add]['YearRemodAdd'], train[is_remode_add]['YearBuilt'])
plt.title('YearRemodAdd with SalePrice')
plt.show()
###Output
c:\users\kuba\appdata\local\programs\python\python36\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
1. 1stFlrSF and GrLivArea are both right-skewed: applying a log transformation clearly improves the shape of their distributions. The Box-Cox transformation is almost the same as the log transform, so we could safely apply either; it is also good to set a threshold on skewness and kurtosis and transform only the features whose values exceed that threshold- WoodDeckSF is correlated with SalePrice and the other square footages, but not strongly- According to the heatmap, the porch variables are not correlated with each other- YearRemodAdd starts in 1950 and equals YearBuilt when no remodelling was done; most remodelling was added after 2000 or recorded as 1950. Its distribution looks messy and fits neither YearBuilt nor SalePrice well - instead of the year we can use an indicator 2ndFlrSF
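A possible sketch of the threshold rule suggested above (the 0.75 cut-off on absolute skewness is an assumed value to be tuned later; `num_feats`, `train` and `stats` are the objects already used in this notebook):
###Code
# log1p-transform every numerical feature whose absolute skewness exceeds a threshold
SKEW_THRESHOLD = 0.75  # assumed cut-off
skewed_feats = [f for f in num_feats if abs(stats.skew(train[f])) > SKEW_THRESHOLD]
train_transformed = train.copy()
train_transformed[skewed_feats] = np.log1p(train_transformed[skewed_feats])
###Markdown
`np.log1p` is used instead of `np.log` so that features containing zeros are handled as well; values transformed this way would have to be mapped back with `np.expm1` whenever results are reported on the original scale.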
###Code
plt.title('2ndFlrSF', fontsize=16)
sns.distplot(train['2ndFlrSF'])
nonzero_2ndFlrSF = train['2ndFlrSF']>0
sns.regplot(train[nonzero_2ndFlrSF]['1stFlrSF'], train[nonzero_2ndFlrSF]['2ndFlrSF'])
plt.title('nonzero 2ndFlrSF correlation with 1stFlrSF')
ndFlrSF_indicator = train['2ndFlrSF'].apply(lambda x: 1 if x>0 else 0)
plt.title('2ndFlrSF as binary')
sns.boxplot(ndFlrSF_indicator, train['1stFlrSF'])
###Output
_____no_output_____
###Markdown
1. 2ndFlrSF is mostly 0 and approximately normally distributed when greater than 0. The correlation with 1stFlrSF means that the house's square footage is (approximately) equal on each floor - applying an indicator (whether a 2nd floor exists in the house) might be reasonable- Houses without a 2nd floor tend to have a larger 1st floor than houses with one Garage
###Code
sns.distplot(train['GarageArea'])
show_log_transform(train[train['GarageArea']>0]['GarageArea'])
slope, intercept, _, _, _ = stats.linregress(train['GarageCars'],train['GarageArea'])
line = lambda x: slope*x+intercept
_, ax = plt.subplots(1, 2, figsize=(14, 4))
sns.boxplot('GarageCars', 'GarageArea', data=train, ax=ax[0])
ax[0].plot(train['GarageCars'],line(train['GarageCars']))
sns.boxplot('GarageCars', 'SalePrice', data=train, ax=ax[1])
print('4-cars-garage houses num: ', (train['GarageCars']==4).sum())
garage_blt_with_house = train['GarageYrBlt'] == train['YearBuilt']
garage_blt_before_house = train['GarageYrBlt'] < train['YearBuilt']
garage_blt_after_house = train['GarageYrBlt'] > train['YearBuilt']
different_garage_YrBlt = garage_blt_before_house | garage_blt_after_house
ax = plt.subplot(111)
eq =sns.regplot(train[garage_blt_with_house]['GarageYrBlt'], train[garage_blt_with_house]['YearBuilt'], ax=ax)
before = ax.scatter(train[garage_blt_before_house]['GarageYrBlt'], train[garage_blt_before_house]['YearBuilt'], color='red', alpha=.6)
after = ax.scatter(train[garage_blt_after_house]['GarageYrBlt'], train[garage_blt_after_house]['YearBuilt'], color='green', alpha=.6)
ax.legend((before, after), ('built before', 'built after'))
print("Ratio of garages built same time with house: ", garage_blt_with_house.sum()/train.shape[0])
print("Number of garages built before house: ", (train['GarageYrBlt']<train['YearBuilt']).sum())
print("Number of garages built after house: ", (train['GarageYrBlt']>train['YearBuilt']).sum())
###Output
Ratio of garages built same time with house: 0.7458904109589041
Number of garages built before house: 9
Number of garages built after house: 281
###Markdown
1. The GarageArea distribution is messy and, because of several peaks, is not well approximated by a normal distribution. In addition, there are many examples without a garage (area = 0), and no transformation (log, Box-Cox) improves the shape of the distribution- GarageCars is strongly correlated with GarageArea (multicollinearity), except for 4-car garages, where the regression line does not fit as well as for the smaller garages - to tackle the undesirable shape of the GarageArea distribution we could use only GarageCars in the model (which seems reasonable, since the main function of a garage is to park cars: the number of cars determines its attractiveness and is itself determined by the area)- Apart from the fact that the 4-car garages do not fit the overall area regression line, their house prices are also surprisingly lower than those of 2- or 3-car-garage houses. However, there are only 5 such houses - we can treat the 4-car-garage houses as outlier-like cases and ignore them in the model- Almost 75% of garages were built along with the house. Most of the remaining garages were built before or after the house with a difference of only 1-3 years, so we assume the garage year built is equal to the house year built. Hence GarageYrBlt is redundant with YearBuilt - we can drop this feature from the model Basement
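The two conclusions above translate into a couple of lines; a sketch on a copy of the frame (the name `train_garage` is assumed) so nothing downstream is affected:
###Code
# keep GarageCars as the single garage-size signal and drop the redundant columns
train_garage = train.copy()
train_garage = train_garage.drop(columns=['GarageYrBlt', 'GarageArea'])
###Markdown
GarageCars then remains as the only garage-size feature in this variant.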
###Code
_, ax = plt.subplots(1, 2, figsize=(14,4))
ax[0].set_title('Dist BsmtFinSF1')
ax[1].set_title('Dist BsmtFinSF2')
sns.distplot(train['BsmtFinSF1'], ax = ax[0])
sns.distplot(train['BsmtFinSF2'], ax = ax[1])
print('FinSF2 when FinSF is 0: ',train['BsmtFinSF2'].where(train['BsmtFinSF1']==0).unique())
sns.distplot(train['BsmtFinSF1'].where(train['BsmtFinSF2']>0).fillna(0))
plt.title('BsmtFinSF1 when BsmtFinSF2>0')
plt.show()
_,ax = plt.subplots(1, 2, figsize=(14, 4))
sns.countplot(train['BsmtFinType1'].fillna('NA'), ax=ax[0])
sns.countplot(train['BsmtFinType2'].fillna('NA'), ax=ax[1])
bmst_fin = train['BsmtFinSF1'] + train['BsmtFinSF2']
bmst_fin_unfin_ratio = ((bmst_fin - train['BsmtUnfSF'])/train['TotalBsmtSF']).fillna(0)
sns.distplot(bmst_fin_unfin_ratio)
plt.title('Dist finished/unfinished bsmt ratio')
plt.show()
###Output
c:\users\kuba\appdata\local\programs\python\python36\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
1. BsmtFinSF2 is mostly equal to 0 and has no significant correlation with SalePrice or any other feature (based on the heatmap)2. When the type-1 finished area is 0, the type-2 finished area is also 0; when a type-2 area exists, the type-1 area can still be positive (although it is most often 0)3. Most cases have a totally unfinished or only partially finished basement. The type-2 finished area is almost always unfinished, low quality or an average rec room, so most of the time the basement is not prepared for living
###Code
TotalSaleSF = pd.Series(train['GrLivArea']+train['TotalBsmtSF']+train['GarageArea'], name='TotalSF')
show_log_transform(TotalSaleSF)
all_SF = TotalSaleSF
sns.scatterplot(all_SF, price)
sns.jointplot(np.log(TotalSaleSF), np.log(price), kind='kde', xlim=(7, 9), ylim=(11,13.5))
###Output
_____no_output_____
###Markdown
1. The total house SF is all the area that belongs to the house. However, even when the transformation is applied, the data still has positive (undesirably high) kurtosis, and the use of such a structural variable must be assessed via model selection - the total area could replace all the individual areas, but that would lose a lot of essential information - using the total alongside the other SFs will introduce a feature with structural multicollinearity into the model
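One way to quantify the multicollinearity concern is a variance-inflation-factor check; a numpy-only sketch over an assumed subset of the square-footage columns:
###Code
# VIF_i is the i-th diagonal entry of the inverse correlation matrix of the predictors
area_cols = ['GrLivArea', 'TotalBsmtSF', 'GarageArea', '1stFlrSF']  # assumed subset
corr_matrix = np.corrcoef(train[area_cols].values, rowvar=False)
vif = np.diag(np.linalg.inv(corr_matrix))
###Markdown
Adding TotalSF next to all of its components would make this correlation matrix exactly singular (infinite VIF), which is the structural multicollinearity mentioned above; keeping either the total or the components is the usual way out.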
###Code
sns.distplot(train['MSSubClass'])
sns.distplot(train['LotFrontage'].dropna())
###Output
c:\users\kuba\appdata\local\programs\python\python36\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
OverallQual and OverallCond
###Code
_, ax = plt.subplots(1, 2, figsize=(14, 4))
sns.boxplot(train['OverallCond'], price, ax=ax[0])
sns.regplot(train['OverallQual'], price, ax=ax[1])
ax[0].set_title('OverallCond vs SalePrice')
ax[1].set_title('OverallQual vs SalePrice')
plt.show()
###Output
c:\users\kuba\appdata\local\programs\python\python36\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
1. OverallCond has no linear correlation with SalePrice; the prices become more spread out as the OverallCond value increases, and encoding it as one-hot could overcome this issue- OverallQual is very strongly linearly correlated with SalePrice Other correlations (multicollinearity) LotFrontage vs. LotArea
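A quick sketch of the one-hot idea (the `OverallCond_` prefix and the `train_ohe` name are assumptions made for illustration):
###Code
# encode OverallCond as dummy indicators instead of a numeric scale
overall_cond_dummies = pd.get_dummies(train['OverallCond'], prefix='OverallCond')
train_ohe = pd.concat([train.drop(columns=['OverallCond']), overall_cond_dummies], axis=1)
###Markdown
OverallQual can stay as an ordinal numeric feature, since its relation to SalePrice looks close to linear.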
###Code
non_null_frontage_idx = train['LotFrontage'].notnull()
print('LotFrontage with LotArea correlation:')
print('Raw values corr: ', stats.pearsonr(train[non_null_frontage_idx]['LotFrontage'], train[non_null_frontage_idx]['LotArea']))
print('Log-transfomed values corr: ', stats.pearsonr(np.log(train[non_null_frontage_idx]['LotFrontage']), np.log(train[non_null_frontage_idx]['LotArea'])))
sns.regplot(np.log(train['LotFrontage']), np.log(train['LotArea']))
###Output
c:\users\kuba\appdata\local\programs\python\python36\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
1. Taking the logarithm of these two variables boosts the correlation and decreases the p-value (good)2. The existence of a correlation between these variables gives us the ability to: - impute the missing data in LotFrontage (e.g. by regression) or delete LotFrontage and rely only on the LotArea feature (LotArea is more correlated with the target than LotFrontage) - replace the two variables by their sum4. Some values seem to be outliers, namely LotFrontage>300 and LotArea>200000 (in the original data) 1stFlrSF vs TotalBsmtSF
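A sketch of the regression-based imputation mentioned in the first point, using a simple log-log fit with `np.polyfit` so no extra libraries are needed (the variable names are assumed):
###Code
# impute missing LotFrontage from LotArea via a linear fit in log-log space
known = train['LotFrontage'].notnull()
slope, intercept = np.polyfit(np.log(train.loc[known, 'LotArea']),
                              np.log(train.loc[known, 'LotFrontage']), deg=1)
predicted = np.exp(intercept + slope * np.log(train.loc[~known, 'LotArea']))
lot_frontage_imputed = train['LotFrontage'].fillna(predicted)
###Markdown
Whether to keep the imputed column, the sum of the two variables, or only LotArea can then be decided during model selection.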
###Code
sns.scatterplot(train['1stFlrSF'], train['TotalBsmtSF'])
###Output
_____no_output_____
###Markdown
1. 1stFlrSF is strongly correlated with TotalBsmtSF, however there are houses without a basement - we can replace the TotalBsmtSF feature with a binary feature indicating whether the house contains a basement or not (see below); this replacement reduces the information about the basement SF, but since it is correlated with 1stFlrSF (and the other basement SFs) we will not lose much information Numerical to categorical YearBuilt
###Code
above_yr = train['YearBuilt']>1910
ax = plt.subplot(111)
ax.scatter(train[~above_yr]['YearBuilt'], train[~above_yr]['SalePrice'], color='red', alpha=.6)
sns.regplot(train[above_yr]['YearBuilt'], train[above_yr]['SalePrice'], ax=ax)
#Example of such discretization
YearBuilt_discretized = pd.qcut(train['YearBuilt'], q=4, labels=['retro', 'old', 'medium', 'new'], retbins=True)
# YearBuilt_discretized = pd.qcut(train['YearBuilt'], 7, retbins=True)
YearBuilt_discretized[1]
_, ax = plt.subplots(1, 2, figsize=(14, 4))
sns.distplot(train['YearBuilt'], bins=YearBuilt_discretized[1], ax=ax[0])
ax[0].set_title('Discretized values distribution')
sns.boxplot(YearBuilt_discretized[0], price, ax=ax[1])
ax[1].set_title('Discretized with SalePrice correlation')
###Output
_____no_output_____
###Markdown
1. Since there are many examples of old houses that are not as cheap as the regression line suggests, we can discretize the values of YearBuilt. A feasible division will be set during model selection, however it seems reasonable to divide with respect to the regression line (especially for the very old houses, which are more expensive than expected) - an ad-hoc division (chosen by observing the regplot) seems to work well, but each category (especially the oldest houses) contains too many outliers - using qcut (cutting the continuous variable by quantiles) we also get a good division (4-6 quantiles work best), however it also suffers from too many outliers among the older houses TotalBsmtSF
###Code
nonzero_TotalBsmtSF_idx = train['TotalBsmtSF']>0
sns.scatterplot(train[nonzero_TotalBsmtSF_idx]['1stFlrSF'], train[nonzero_TotalBsmtSF_idx]['TotalBsmtSF'])
plt.plot(train[~nonzero_TotalBsmtSF_idx]['1stFlrSF'], train[~nonzero_TotalBsmtSF_idx]['TotalBsmtSF'], color='red')
TotalBsmtSF_disc = train['TotalBsmtSF'].apply(lambda x: 0 if x==0 else 1)
sns.boxplot(TotalBsmtSF_disc, price)
###Output
_____no_output_____ |
.ipynb_checkpoints/trolleybus_draw-checkpoint.ipynb | ###Markdown
Drawing N routes in random colors
###Code
print("Enter number of routes?")
number = int(input())
routes = np.empty((0,1), str)
Map = folium.Map(location=[55.734503,37.593484], zoom_start = 15, tiles = "CartoDB dark_matter")
for i in range(0, number):
print("Enter number of route")
NumOf = input()
routes = np.append(routes, np.array([[NumOf]]), axis=0)
drawRoute(Map, NumOf)
Map.save(datetime.now().strftime("%H%M_%d%b%Y")+"_map.html")
print("Done")
Map
def delayedCompare():
print("Enter number of NOT delayed routes?")
number = int(input())
routes = np.empty((0, 1), str)
ActualMap = folium.Map(
location=[55.734503, 37.593484], zoom_start=15, tiles="CartoDB dark_matter")
OldMap = folium.Map(location=[55.734503, 37.593484],
zoom_start=15, tiles="CartoDB dark_matter")
def color(): return random.randint(0, 255)
for i in range(0, number):
print("Enter number of NOT delayed route")
NumOf = input()
colorRoute = '#%02X%02X%02X' % (color(), color(), color())
routes = np.append(routes, np.array([[NumOf]]), axis=0)
drawRoute(ActualMap, NumOf, colorRoute)
drawRoute(OldMap, NumOf, colorRoute)
print("Enter number of delayed routes?")
number = int(input())
routes = np.empty((0, 1), str)
for i in range(0, number):
print("Enter number of delayed route")
NumOf = input()
colorRoute = '#%02X%02X%02X' % (color(), color(), color())
routes = np.append(routes, np.array([[NumOf]]), axis=0)
drawRoute(OldMap, NumOf, colorRoute)
ActualMap.save(datetime.now().strftime("%H%M_%d%b%Y")+"_newMap.html")
OldMap.save(datetime.now().strftime("%H%M_%d%b%Y")+"_OldMap.html")
print("Done")
ActualMap
delayedCompare()
data[data.TypeOfTransport == "троллейбус"][data.RouteNumber ==
1933].geoData.to_string(index=False)
troll1 = data[data.TypeOfTransport == "троллейбус"][data.RouteNumber ==
1933].geoData.to_string(index=False)
Map = folium.Map(location=[55.734503,37.593484], zoom_start = 15, tiles = "CartoDB dark_matter")
troll1 = troll1.split('coordinates=[', 1)[1]
troll1 = troll1.split(']]], center=', 1)[0]
troll1 = troll1.replace("[", "")
troll1 = troll1.replace("]", "")
troll1 = troll1.replace(",", "")
troll1 = troll1.split()
ini_array = np.array(troll1)
res = ini_array.astype(np.float)
lon = []
lat = []
for i in range(0, len(res)):
if i % 2:
lon.append(float(res[i]))
else:
lat.append(float(res[i]))
coord = []
def color(): return random.randint(0, 255)
for i in range(0, len(lat)):
coord.append([lat[i], lon[i]])
print(coord)
color1 = '#%02X%02X%02X' % (color(), color(), color())
route_line = folium.PolyLine(
coord,
weight=10,
smoothFactor=0.1,
color=color1).add_to(Map)
folium.CircleMarker(
location=[55.810808, 37.485848],
radius=3,
popup='Конечная у ПС',
color='#3186cc',
fill=True,
fill_color='#3186cc'
).add_to(Map)
Map
Map.save(datetime.now().strftime("%H%M_%d%b%Y")+"_map.html")
Map
Map = folium.Map(location=[55.734503,37.593484], zoom_start = 15, tiles = "CartoDB dark_matter")
coord = [[37.485848, 55.810808], [37.485848, 55.810808]]
route_line = folium.PolyLine(
coord,
weight=10,
smoothFactor=0.1,
color=color1).add_to(Map)
Map
###Output
_____no_output_____ |
M4DS_GROUP2_PROJECT2/M4DS_PROJECT2_group2.ipynb | ###Markdown
Reading the data from the data file.
###Code
data_path = "/content/drive/MyDrive/Colab Notebooks/project2/data_set.data"
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
df = pd.read_csv(data_path, names = headers)
df.shape
pd.set_option('display.max_rows', 10,'display.max_columns',None)
df.head(204)
###Output
_____no_output_____
###Markdown
Data Cleaning: Converting '?' to NaN *Some attributes have missing elements in some instances, so the unknown value '?' is replaced by NaN.*
###Code
pd.set_option('display.max_rows',20)
df.replace('?',np.nan,inplace=True)
miss_data=df.isnull()
display(miss_data.sum())
miss_data_col=["normalized-losses","bore","stroke","horsepower","peak-rpm","price"]
for c in miss_data_col:
avg=df[c].astype("float").mean(axis=0)
df[c].replace(np.nan,avg,inplace=True)
pd.set_option('display.max_rows', 10,'display.max_columns', None)
display(df)
###Output
_____no_output_____
###Markdown
Missing data
###Code
miss_data=df.isnull()
display(miss_data.sum())
###Output
_____no_output_____
###Markdown
*We can see from the above list the attributes having missing data values:** *normalized-losses: **41** missing data values** *num-of-doors: **2** missing data values** *bore: **4** missing data values** *stroke: **4** missing data values** *horsepower: **2** missing data values** *peak-rpm: **2** missing data values** *price: **4** missing data values* Treating Missing Values**Missing data was replaced by the column mean for continuous variables and the mode for categorical variables.**
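A generic version of this rule can be written as a small helper (the function name is hypothetical, and the list of categorical columns is passed in explicitly because the numerical columns still contain string values at this stage):
###Code
def impute_missing(frame, categorical_cols):
    """Fill missing values: mode for categorical columns, column mean for the rest."""
    frame = frame.copy()
    for col in frame.columns:
        if not frame[col].isnull().any():
            continue
        if col in categorical_cols:
            frame[col] = frame[col].fillna(frame[col].mode()[0])
        else:
            frame[col] = frame[col].fillna(frame[col].astype("float").mean())
    return frame
###Markdown
The cells below apply the same logic column by column: the mode for num-of-doors and the column means (already filled during the cleaning step above) for the numerical attributes.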
###Code
df["num-of-doors"].replace(np.nan,df["num-of-doors"].value_counts().idxmax(),inplace =True )
print(df.isnull().sum())
df[["bore"]] = df[["bore"]].astype("float")
df[["stroke"]] = df[["stroke"]].astype("float")
df[["normalized-losses"]] = df[["normalized-losses"]].astype("int")
df[["price"]] = df[["price"]].astype("float")
df[["peak-rpm"]] = df[["peak-rpm"]].astype("float")
df[["horsepower"]] = df[["horsepower"]].astype("float")
df.info()
###Output
symboling 0
normalized-losses 0
make 0
fuel-type 0
aspiration 0
..
horsepower 0
peak-rpm 0
city-mpg 0
highway-mpg 0
price 0
Length: 26, dtype: int64
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 204 entries, 0 to 203
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 symboling 204 non-null int64
1 normalized-losses 204 non-null int64
2 make 204 non-null object
3 fuel-type 204 non-null object
4 aspiration 204 non-null object
5 num-of-doors 204 non-null object
6 body-style 204 non-null object
7 drive-wheels 204 non-null object
8 engine-location 204 non-null object
9 wheel-base 204 non-null float64
10 length 204 non-null float64
11 width 204 non-null float64
12 height 204 non-null float64
13 curb-weight 204 non-null int64
14 engine-type 204 non-null object
15 num-of-cylinders 204 non-null object
16 engine-size 204 non-null int64
17 fuel-system 204 non-null object
18 bore 204 non-null float64
19 stroke 204 non-null float64
20 compression-ratio 204 non-null float64
21 horsepower 204 non-null float64
22 peak-rpm 204 non-null float64
23 city-mpg 204 non-null int64
24 highway-mpg 204 non-null int64
25 price 204 non-null float64
dtypes: float64(10), int64(6), object(10)
memory usage: 41.6+ KB
###Markdown
Converting categorical data values into numerical values
###Code
df["num-of-doors"] = df["num-of-doors"].apply(lambda x: 4 if x == 'four' else 2)
df.replace({'four': 4,'six': 6, 'five': 5, 'three': 3, 'twelve': 12, 'two': 2, 'eight': 8},inplace=True)
###Output
_____no_output_____
###Markdown
Converting string to integer
###Code
for i in ['make','fuel-type','aspiration','body-style','drive-wheels','engine-location','engine-type','fuel-system']:
codes=None
unique=None
#dict_+i = {unique, }
codes, uniques = pd.factorize(df[i])
df[i]=codes
display(df)
df=df.astype("float")
y = df['symboling'].copy()
X = df.drop('symboling', axis=1).copy()
scaler = StandardScaler()
X = scaler.fit_transform(X)
trainX, testX, trainy, testy = train_test_split(X, y, train_size=0.8, random_state=100)
###Output
_____no_output_____
###Markdown
Define the model, compile it, fit it and evaluate it
###Code
model = Sequential()
model.add(Dense(200, input_dim=25, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='softmax'))
# compile model
opt = SGD(lr=0.3, momentum=0.9)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=200, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot accuracy during training
pyplot.subplot(212)
pyplot.title('Accuracy')
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
###Output
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/gradient_descent.py:102: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
super(SGD, self).__init__(name, **kwargs)
###Markdown
L1 regularizer
###Code
yl = df['symboling'].copy()
Xl = df.drop('symboling', axis = 1).copy()
Xl = scaler.fit_transform(Xl)
trainXl, testXl, trainyl, testyl = train_test_split(Xl, yl, train_size = 0.8, random_state = 100)
# Defining the model
model = Sequential()
model.add(Dense(100, input_dim = 25, activation='relu', kernel_initializer='he_uniform', kernel_regularizer = tf.keras.regularizers.l1(0.001)))
model.add(Dense(1, activation='softmax', kernel_regularizer = tf.keras.regularizers.l1(0.000001)))
# Compiling the model
opt = SGD(learning_rate = 0.3, momentum = 0.9)
model.compile(loss='categorical_crossentropy', optimizer = opt, metrics=['accuracy'])
history = model.fit(trainXl, trainyl, validation_data=(testXl, testyl), epochs = 200, verbose = 0)
# Evaluating the model
_, train_acc = model.evaluate(trainXl, trainyl, verbose = 0)
_, test_acc = model.evaluate(testXl, testyl, verbose = 0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
###Output
Train: 0.325, Test: 0.341
|
24_discretization.ipynb | ###Markdown
Foundations of Computational Economics 24, by Fedor Iskhakov, ANU. Optimization through discretization (grid search) [https://youtu.be/LyWRehkzIws](https://youtu.be/LyWRehkzIws) Description: Grid search method and its use cases. - Elementary technique of finding a maximum or minimum of a function - Main advantage: **robust** - works with *nasty* functions - derivative free - approximates **global** optimum - Main disadvantage: everything else - slow - imprecise - terrible in multivariate problems - Why used so much in economics? - objective function may be nasty - **as first step method** in multi-algorithms Algorithm$$f(x) \longrightarrow \max$$1. Take a starting value $ x_0 $, define a region of search, i.e. $ I = (x_0-a,x_0+b) $ 1. Impose on $ I $ a discrete grid consisting of points $ x_i, i \in 1,\dots,n $ 1. Compute $ f(x_i) $ for all $ i $ 1. Return the maximum of $ f(x_i) $ as the result Example$$\max_{x \in \mathbb{R}} f(x) = -x^4 + 2.5x^2 + x + 10$$First order condition leads to the critical points analytically:$$\begin{eqnarray}f'(x)=-4x^3 + 5x +1 &=& 0 \\-4x(x^2-1) + x+1 &=& 0 \\(x+1)(-4x^2+4x+1) &=& 0 \\\big(x+1\big)\big(x-\frac{1}{2}-\frac{1}{\sqrt{2}}\big)\big(x-\frac{1}{2}+\frac{1}{\sqrt{2}}\big) &=& 0\end{eqnarray}$$
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = [12, 8]
f = lambda x: -x**4+2.5*x**2+x+10
df = lambda x: -4*x**3+5*x+1
d2f = lambda x: -12*x**2+5
critical_values = [-1.0,0.5 - 1/np.sqrt(2),0.5 + 1/np.sqrt(2)] # analytic
# make a plot of the function and its derivative
xd = np.linspace(-2,2,1000)
plt.plot(xd,f(xd),label='function',c='red')
plt.plot(xd,df(xd),label='derivative',c='darkgrey')
plt.plot(xd,d2f(xd),label='2nd derivative',c='lightgrey')
plt.grid(True)
plt.legend()
for cr in critical_values:
plt.plot([cr,cr],[-6,15],c='k',linestyle=':')
import sys
sys.path.insert(1, './_static/include/') # add path to the modules directory
import optim as o # import our own optimization routines from last several lectures, see optim.py
help(o)
# first, try to optimize with Newton
xs=[]
for x0 in [0.5,-0.5,1.0]: # try different starting values
xs.append( o.newton(df,d2f,x0))
print('Newton converged to: %r'%xs)
# optimization through discretization
def grid_search(fun,bounds=(0,1),ngrid=10):
'''Grid search between given bounds over given number of points'''
grid = np.linspace(*bounds,ngrid)
func = fun(grid)
i = np.argmax(func) # index of the element attaining maximum
return grid[i]
b0,b1 = -2,2 # bounds of region of search
xs = grid_search(fun=f,bounds=(b0,b1),ngrid=10)
cr = critical_values[np.argmin(np.abs(critical_values-xs))]
print('Grid search returned x*=%1.5f (closest to critical point %1.5f, diff=%1.3e)'%(xs,cr,np.abs(xs-cr)))
# check how fast accuracy increases with the number of grid points
data = {'n':[2**i for i in range(20)]}
data['err'] = np.empty(shape=len(data['n']))
for i,n in enumerate(data['n']):
xs = grid_search(fun=f,bounds=(b0,b1),ngrid=n)
cr = critical_values[np.argmin(np.abs(critical_values-xs))]
data['err'][i] = np.abs(xs-cr)
plt.plot(data['n'],data['err'],marker='o')
plt.yscale('log')
###Output
_____no_output_____
###Markdown
More appropriate example- grid search is slow and inaccurate - yet, it picks out the **global** optimum every time - more appropriate example: $$f(x) = \begin{cases}\exp(x+3) \text{ if } x \in (-\infty,-1] \\10x+13 \text{ if } x \in (-1,-0.5] \\75x^3 \text{ if } x \in (-0.5,0.5] \\5 \text{ if } x \in (0.5,1.5] \\\log(x-1.5) \text{ if } x \in (1.5,+\infty) \\\end{cases}$$
###Code
def f(x):
x = np.asarray(x)
if x.size==1:
x = x[np.newaxis] # to be able to process scalars in the same way
res = np.empty(shape=x.shape)
for i,ix in enumerate(x):
if ix<=-1:
res[i] = np.exp(ix+3)
elif -1 < ix <= -0.5:
res[i] = 10*ix+13
elif -0.5 < ix <= 0.5:
res[i] = 75*ix**3
elif 0.5 < ix <= 1.5:
res[i] = 5.0
else:
res[i] = np.log(ix-1.5)
return res
# plot
xd = np.linspace(-2,2,1000)
plt.plot(xd,f(xd),label='function',c='red')
plt.ylim((-10,10))
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Why is this hard*Any function with cases is usually nasty*- kinks are non-differentiable points, trouble for Newton method - discontinuities are troubles for existence of either roots or maximum (think $ 1/x $ which illustrates both cases) - multiple local optima are troubles for non-global methods - regions where the function is completely flat will likely trigger the stopping criterion, trouble for convergence Discretization and grid search may be the only option! Examples of having to work with hard cases- economic model may have discontinuities and/or kinks - estimation procedure may require working with piecewise flat and/or discontinuous functions - the function at hand may be costly to compute or unclear in nature (or subject of the study) - robustness checks over special parameters (categorical variables, assumptions, etc)
###Code
# bounds and the number of points on the grid
bounds, n = (-2,2), 10 # try 20 30 50 500
plt.plot(xd,f(xd),label='function',c='red')
plt.ylim((-10,10))
plt.grid(True)
# vizualize the grid
for x in np.linspace(*bounds,n):
plt.scatter(x,f(x),s=200,marker='|',c='k',linewidth=2)
# solve
xs = grid_search(f,bounds,ngrid=n)
plt.scatter(xs,f(xs),s=500,marker='*',c='w',edgecolor='b',linewidth=2) # mark the solution with a star
plt.show()
###Output
_____no_output_____ |
examples/plot_classification.ipynb | ###Markdown
Nearest Neighbors ClassificationSample usage of Nearest Neighbors classification.It will plot the decision boundaries for each class.Hubness reduction seems to give more weight to outliers here. Adapted from https://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
###Code
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from hubness import neighbors
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
# we only take the first two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
for hubness in [None, 'mutual_proximity']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors,
hubness=hubness,
weights='distance')
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,
edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, hubness = '%s')"
% (n_neighbors, hubness))
plt.show()
###Output
Automatically created module for IPython interactive environment
|
notebook/provenance_explorer_demo.ipynb | ###Markdown
Changes at state
###Code
@interact
def interactive_form(start_state=range(0,num_state),end_state=range(0,num_state+2)):
#return df.loc[df[column] > x]
return orpe.get_changes_each_state(range(start_state,end_state+1))
#orpe.get_changes_each_state(2)
###Output
_____no_output_____
###Markdown
Column Schema changes at state
###Code
@interact
def interactive_form(start_state=range(0,num_state),end_state=range(0,num_state+2)):
#return df.loc[df[column] > x]
return orpe.get_col_at_state_order(range(start_state,end_state+1))
#orpe.get_col_at_state_order(range(2,5))
###Output
_____no_output_____
###Markdown
Row order at state
###Code
@interact
def interactive_form(state_id=range(0,num_state+2)):
#return df.loc[df[column] > x]
return orpe.get_row_at_state_order(state_id)
#orpe.get_row_at_state_order(5)
###Output
_____no_output_____
###Markdown
Cell History
###Code
@interact
def interactive_form(col="0",row="3"):
#return df.loc[df[column] > x]
return orpe.get_cell_history(int(row),int(col))
###Output
_____no_output_____
###Markdown
Snapshot at state
###Code
@interact
def interactive_form(state=range(0,num_state+2)):
#return df.loc[df[column] > x]
return orpe.get_snapshot_at_state(state)
orpe.get_col_at_state_order(1)
@interact
def interactive_form(x=range(0,num_state),y=range(0,num_state)):
#return df.loc[df[column] > x]
return orpe.get_column_at_state(range(x,y+1))
tt = orpe.get_snapshot_at_state(0)
tt
orpe.get_cell_history(3,2)
orpe.get_column_at_state(1)
orpe.get_col_at_state_order(0)
orpe.get_col_idx_to_logic(8,1)
orpe.get_col_logic_to_idx(8,2)
xx = orpe.get_row_at_state(1)
xx
orpe.get_row_at_state_order(0)
orpe.get_row_logic_to_idx(4,2)
orpe.get_row_idx_to_logic(4,3)
orpe.get_values_at_state(0)
xx.row_pos_id.tolist().index(4)
list(orpe.cursor.execute("select * from col_each_state where state=8"))
names = list(map(lambda x: x[0], orpe.cursor.description))
names
xx = orpe.cursor.execute("select * from col_each_state where state=8")
orpe.get_column_at_state(5)
###Output
_____no_output_____ |
student tutorial.ipynb | ###Markdown
NumPy and Pandas Tutorial HODP Bootcamp Week 4 October 10, 2018 Some Python refreshers . . . - datatypes (strings, integers)- functions- data structures like lists and dictionaries
###Code
lst = [1, "Emma", 5.0, {"name": "Emma", "age": 20}]
# Get the first element of the list
lst[0]
# Get the last element of the list
lst[-1]
# Get all of the keys of the dictionary
for key in lst[-1]:
print(key)
print(lst[-1][key])
# Get all of the values of the dictionary
for value in lst[-1].values():
    print(value)
###Output
_____no_output_____
###Markdown
This week:* Learn how to use Python libraries numpy and pandas to make data analysis easy and efficient* Understand key differences between Python, NumPy, Pandas, and more traditional tools like Google Sheets* Practice your new data science skills! Getting Started
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Python vs. NumPy* Python lists are flexible, but bugs can be tough to find and for-loops to manipulate data can be slow* NumPy arrays have fixed types and functions can be __vectorized__ and operations can be __broadcast__ across arrays
###Code
lst = ["Emma", "Jeffrey", 1, 2] # This is a valid Python list
lst
np_lst = np.array(lst) # Numpy forces them all to be strings
np_lst
for elt in lst:
print(elt + " 4")
for elt in np_lst:
print(elt + " is fun")
###Output
Emma is fun
Jeffrey is fun
1 is fun
2 is fun
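###Markdown
A minimal added sketch of the vectorization and broadcasting mentioned above (toy numeric data): NumPy applies arithmetic to every element at once, without an explicit Python loop.
###Code
nums = np.array([1, 2, 3, 4])
print(nums * 10)                           # the scalar 10 is broadcast across the array
print(nums + np.array([10, 20, 30, 40]))   # element-wise addition, no for-loop needed
###Output
_____no_output_____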
###Markdown
Creating NumPy arrays First, we can use ``np.array`` to create arrays from Python lists:
###Code
# integer array:
np.array([1, 4, 2, 5, 3])
###Output
_____no_output_____
###Markdown
Remember that unlike Python lists, NumPy is constrained to arrays that all contain the same type.If types do not match, NumPy will upcast if possible (here, integers are up-cast to floating point):
###Code
np.array([3.14, 4, 2, 3]) # Notice how the elements in the resulting array are all floats
np.array([1, 2, 3, 4], dtype='float32') # You can explicitly set the type with the dtype keyword
###Output
_____no_output_____
###Markdown
Numpy has a bunch of handy built-in functions to generate arrays:
###Code
# Create a length-10 integer array filled with zeros
np.zeros(10, dtype=int)
# Create a 3x5 floating-point array filled with ones
np.ones((3, 5), dtype=float)
# Create an array of five values evenly spaced between 0 and 1
np.linspace(0, 1, 5)
array = np.arange(9).reshape(3,3)
array
###Output
_____no_output_____
###Markdown
We can slice NumPy arrays and index into them using bracket notation:
###Code
array
array[0, 1]
array[:, 2]
array[1, :]
###Output
_____no_output_____
###Markdown
Rule of Thumb: Don't reinvent the wheelGoogle if a function already exists that does what you want So, how is this useful for data analysis? Often when faced with a large amount of data, a first step is to compute summary statistics for the data in question.Perhaps the most common summary statistics are the __mean__ and __standard deviation__, which allow you to summarize the "typical" values in a dataset, but other aggregates are useful as well (the sum, product, median, minimum and maximum, quantiles, etc.). NumPy has fast built-in aggregation functions for working on arrays; we'll discuss and demonstrate some of them here.
###Code
big_array = np.random.rand(1000000)
%timeit -n 10 sum(big_array)
%timeit -n 10 np.sum(big_array)
###Output
97.5 ms ± 8.06 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
591 µs ± 57.7 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
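###Markdown
The other summary statistics mentioned above work the same way; a short added sketch on the same `big_array`:
###Code
print(np.mean(big_array), np.std(big_array))                # mean and standard deviation
print(np.median(big_array), np.percentile(big_array, 75))   # median and 75th percentile
###Output
_____no_output_____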
###Markdown
Some more handy features of NumPy: One common type of aggregation operation is an aggregate along a row or column. Say you have some data stored in a two-dimensional array:
###Code
M = np.random.random((3, 4))
print(M)
###Output
[[ 0.60645511 0.29349287 0.5432832 0.0800523 ]
[ 0.34077667 0.08264608 0.06142478 0.50616481]
[ 0.7539802 0.14892548 0.0816191 0.52675839]]
###Markdown
By default, each NumPy aggregation function will return the aggregate over the entire array:
###Code
M.min()
###Output
_____no_output_____
###Markdown
But what if you want the min for each row or each column?
###Code
# Find the min of each row
M.min(axis=1)
# Find the min of each column
M.min(axis=0)
###Output
_____no_output_____
###Markdown
Other aggregation functions Most aggregates have a ``NaN``-safe counterpart that computes the result while ignoring missing values, which are marked by the special floating-point ``NaN`` value. The following table provides a list of useful aggregation functions available in NumPy:|Function Name | NaN-safe Version | Description ||-------------------|---------------------|-----------------------------------------------|| ``np.sum`` | ``np.nansum`` | Compute sum of elements || ``np.prod`` | ``np.nanprod`` | Compute product of elements || ``np.mean`` | ``np.nanmean`` | Compute mean of elements || ``np.std`` | ``np.nanstd`` | Compute standard deviation || ``np.var`` | ``np.nanvar`` | Compute variance || ``np.min`` | ``np.nanmin`` | Find minimum value || ``np.max`` | ``np.nanmax`` | Find maximum value || ``np.argmin`` | ``np.nanargmin`` | Find index of minimum value || ``np.argmax`` | ``np.nanargmax`` | Find index of maximum value || ``np.median`` | ``np.nanmedian`` | Compute median of elements || ``np.percentile`` | ``np.nanpercentile``| Compute rank-based statistics of elements || ``np.any`` | N/A | Evaluate whether any elements are true || ``np.all`` | N/A | Evaluate whether all elements are true | Pandas * Pandas is another useful library for data analysis.* While NumPy is really useful for math, it relies on __arrays__ of specific datatypes (ints, floats, etc).* Pandas uses two data structures: `Series` and `DataFrame` that are designed to package lots of different types of data similar to a spreadsheet.* It combines the functionality of Python and NumPy with the ease of use of Google Sheets. Example: House Rankings We will:1. Read in the data2. Manipulate the data into a more useable form3. Analyze the data4. Plot our results Reading in the data It's super easy to use Pandas to read in data from csv files:
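###Markdown
Before loading the CSV, a small added aside on two points above (the values here are made-up toy data, not the survey): NaN-safe aggregates skip missing values instead of propagating them, and a `Series`/`DataFrame` can also be built directly from Python objects. The survey file is read in right after.
###Code
vals = np.array([1.0, 2.0, np.nan, 4.0])
print(np.sum(vals), np.nansum(vals))   # np.sum propagates the NaN, np.nansum ignores it
s = pd.Series([3, 1, 2], index=['a', 'b', 'c'])                                   # labelled 1-D array
toy = pd.DataFrame({'votes': [10, 20, 15]}, index=['Adams', 'Lowell', 'Quincy'])  # table of aligned Series
print(s)
print(toy)
###Output
_____no_output_____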
###Code
rankings = pd.read_csv("house_rankings_2018.csv")
rankings
###Output
_____no_output_____
###Markdown
And it looks beautiful:
###Code
rankings.set_index("House", inplace=True)
rankings
###Output
_____no_output_____
###Markdown
Manipulating the data It may be useful to also have this data in a NumPy array so we can use some of the NumPy aggregate functions to analyze our data (although Pandas also has its own version of these functions). It's easy to convert between types:
###Code
rankings.values
###Output
_____no_output_____
###Markdown
We can also splice this array to just get the values for the first column or row:
###Code
# The first column
rankings.values[:, 0]
# The first row
rankings.values[0, :]
###Output
_____no_output_____
###Markdown
Analyzing the data First, how many students filled out the survey?
###Code
# TODO
n = rankings.sum(axis=1)[0]
###Output
_____no_output_____
###Markdown
Which house was the most popular? The least popular?
###Code
# Most popular -- TODO
rankings.iloc[:, 0].argmax()
# Least popular -- TODO
rankings.iloc[:, 11].argmax()
print(rankings.iloc[:, 11].idxmax())
print(rankings.iloc[:, 11].max())
###Output
Currier
150
###Markdown
Make a `DataFrame` with the percentage of first place rankings for each house.
###Code
# TODO
newdf = rankings.iloc[:, 0] / n * 100   # column 0 holds the first-place counts (the weighted table is only built further down)
print(newdf)
###Output
House
Adams 3.745318
Cabot 0.936330
Kirkland 3.558052
Mather 3.183521
Quincy 5.243446
Leverett 2.059925
Dunster 8.426966
Currier 2.621723
Eliot 6.928839
Lowell 28.464419
Pforzheimer 1.872659
Winthrop 32.958801
Name: 1, dtype: float64
###Markdown
Make a `DataFrame` with the average ranking for each house. Hint: You could use a `for` loop
###Code
# For loop approach -- TODO
w_rankings = rankings.copy()
w_rankings
for i in range(12):
w_rankings.iloc[:, i] = w_rankings.iloc[:, i] * (i + 1)
print(w_rankings)
###Output
1 2 3 4 5 6 7 8 9 10 11 \
House
Adams 20 60 216 608 925 1584 3283 4800 5994 2800 3872
Cabot 5 52 144 272 175 720 784 1984 3969 11800 17908
Kirkland 19 76 315 800 1775 2268 3528 4480 4536 2400 2904
Mather 17 60 171 400 675 1440 2156 4288 9072 3700 6655
Quincy 28 172 495 1440 1775 2952 3185 2816 1701 1700 1694
Leverett 11 88 360 1168 1900 2916 4606 4224 2916 1800 1331
Dunster 45 268 1017 896 1750 1512 2156 3328 1539 1000 1331
Currier 14 40 144 240 450 684 980 1472 3483 9200 13794
Eliot 37 228 540 1072 1425 2736 2401 2560 3078 2300 1936
Lowell 152 424 567 816 1125 1260 1078 1536 1134 500 847
Pforzheimer 10 84 135 96 400 684 1421 2112 5346 15800 11858
Winthrop 176 584 702 736 975 468 588 576 486 400 484
12
House
Adams 11520
Cabot 13536
Kirkland 4464
Mather 10944
Quincy 576
Leverett 864
Dunster 720
Currier 21600
Eliot 2016
Lowell 1440
Pforzheimer 9072
Winthrop 144
###Markdown
Or you could use Pandas `pd.DataFrame.apply()` to apply a function to your `DataFrame`.
###Code
def f(row):
for i in range(12):
row[i] *= (i + 1)
return row
weighted_rankings = rankings.apply(f, axis=1)
weighted_rankings
mean_rankings = weighted_rankings.sum(axis=1) / n
print(mean_rankings)
mean_rankings.sort_values()
###Output
_____no_output_____ |
M1C (Python)/M1C-Numerical Solutions to Equations I/Bisection Method.ipynb | ###Markdown
Consider a continuous function $f(x)$. Assume there is a root of $f(x) = 0$ in $(a,b)$ (a way to determine this is to see if there is a sign change between $a$ and $b$, and use the Intermediate Value Theorem; in other words, we check whether $f(a)f(b) < 0$). At each step we evaluate $f$ at the midpoint $c = (a+b)/2$. If $f(c) = 0$, we have found the root exactly. Otherwise we look at the sign of $f(a)f(c)$: if $f(a)f(c) < 0$, there is a root in $(a,c)$; if $f(a)f(c) > 0$, then $f(b)f(c) < 0$, i.e. there is a root in $(c,b)$. Either way the bracketing interval is halved, and we repeat until its length is below the desired tolerance. Preparation
###Code
%matplotlib inline
%pylab
###Output
Using matplotlib backend: Qt5Agg
Populating the interactive namespace from numpy and matplotlib
###Markdown
Code
###Code
def intbis(f, xrange, max_iteration, tolerance):
    # List of Tuples storing the successive bracketing intervals
    listofint = [tuple(xrange)]
    # Loop (max_iteration caps the number of bisection steps)
    for n in range(max_iteration):
        # Stop once the interval is within twice the tolerance
        if abs(xrange[1]-xrange[0]) <= 2*tolerance:
            break
        c = (xrange[0]+xrange[1])/2
        # Consider Special Case when the solution is at c.
        if f(c)==0:
            print('The exact solution is x = {}'.format(c))
            break
        # General Case: keep the half-interval containing the sign change
        if f(xrange[0])*f(c) > 0:
            xrange[0] = c
        else:
            xrange[1] = c
        # Storage
        listofint.append(tuple(xrange))
    print('The solution lies between {} and {}'.format(xrange[0], xrange[1]))
    return listofint
###Output
_____no_output_____
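###Markdown
A short added note on convergence: each step halves the bracketing interval, so after $n$ steps its length is $(b-a)/2^n$, and the loop stops once this is at most $2\cdot$tolerance. The cell below just evaluates that bound for the test case used next.
###Code
from math import ceil, log2
a, b, tol = 2, 2.5, 1e-6
n_steps = ceil(log2((b - a) / (2 * tol)))   # smallest n with (b-a)/2**n <= 2*tol
print('at most {} bisection steps are needed'.format(n_steps))
###Output
_____no_output_____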
###Markdown
Testing Cell
###Code
# For obtaining estimates
f = lambda x: x**3-2*x-5
y = intbis(f, [2,2.5], 30, 10**(-6))
print(y)
# For seeing accuracy
lowerlim = [xrange[0] for xrange in y]
upperlim = [xrange[1] for xrange in y]
lowerzero = [f(x) for x in lowerlim]
upperzero = [f(x) for x in upperlim]
plot(lowerzero)
plot(upperzero)
###Output
_____no_output_____ |
07_Visualization/Online_Retail/Exercises_with_solutions_code.ipynb | ###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FaceGrid
g = sns.FacetGrid(customers, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#so we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem.To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries"The question is open to a set of different interpretations.We need to disambiguate.We could do a single plot by looking at all the data from the top 3 countries.Or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled.But "top 3 countries" with respect to what? Two answers suggest themselves:Total sales volume (i.e. total quantity sold) or total sales (i.e. revenue).This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
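###Markdown
A small optional added sketch of the alternative reading mentioned above, ranking countries by revenue rather than by volume (the `Revenue` column itself is only created in Step 7.3.1, so it is computed on the fly here):
###Code
rev_by_country = (online_rt.Quantity * online_rt.UnitPrice).groupby(online_rt.Country).sum()
rev_by_country.sort_values(ascending=False).head(4)   # UK plus the next three, for comparison
###Output
_____no_output_____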
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that."by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID."Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FaceGrid
g = sns.FacetGrid(plottable, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now?We aren't much better-off than what we started with. The data are still extremely scattered around and don't seem quite informative.But we shouldn't despair!There are two things to realize:1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5). So that might suggest a trend.2) We have more data! We've only been looking at the data from 3 different countries and they are plotted on different graphs.So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x).Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising. It would be surprising if that wasn't the case!Nevertheless the rate of drop in quantity is so drastic, it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change that much. But it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like.That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer.x-axis needs values. y-axis isn't that easy to read; show in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/murali0861/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/murali0861/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FaceGrid
g = sns.FacetGrid(customers, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#so we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem.To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries"The question is open to a set of different interpretations.We need to disambiguate.We could do a single plot by looking at all the data from the top 3 countries.Or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled.But "top 3 countries" with respect to what? Two answers suggest themselves:Total sales volume (i.e. total quantity sold) or total sales (i.e. revenue).This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that."by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID."Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FaceGrid
g = sns.FacetGrid(plottable, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer.x-axis needs values. y-axis isn't that easy to read; show in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
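# A lighter-weight sketch (illustrative, not part of the original solution): summing only
# the 'Quantity' column avoids silently summing unrelated numeric columns such as CustomerID.
# The name quantity_by_country is used only for this illustration.
quantity_by_country = online_rt.groupby('Country')['Quantity'].sum()
quantity_by_country.sort_values(ascending=False)[1:11]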
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
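# Sketch (illustrative): the same three countries could be derived from the data instead of
# hard-coded; this is what Step 7.2.1 does later. top3_by_quantity is a name used only here.
top3_by_quantity = (online_rt.groupby('Country').Quantity.sum()
                    .drop('United Kingdom')
                    .sort_values(ascending=False)
                    .head(3).index.tolist())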
#################
# Graph Section #
#################
# creates the FaceGrid
g = sns.FacetGrid(customers, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#So we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
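#A tiny illustrative sketch with toy data (not from the dataset): two customers spend
#the same $1000, yet the summed UnitPrice makes them look completely different,
#because the sum just reflects the number of rows.
import pandas as pd
toy = pd.DataFrame({
    'CustomerID': ['A'] + ['B'] * 1000,
    'Quantity':   [1000] + [1] * 1000,
    'UnitPrice':  [1.0] * 1001,
})
toy.groupby('CustomerID')[['Quantity', 'UnitPrice']].sum()
#Customer A: Quantity 1000, summed UnitPrice 1. Customer B: Quantity 1000, summed UnitPrice 1000.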
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries." The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that. The "by CustomerID" part is easy: it means we're going to plot one dot per CustomerID, in other words we're going to group by CustomerID. "Quantity per UnitPrice" is trickier. Here's what we know: one axis will represent the Quantity assigned to a given customer, which is easy; we can just plot the total Quantity for each customer. The other axis will represent a UnitPrice assigned to a given customer. Remember that a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides, it's not quite clear what we mean when we say "unit price per customer"; it sounds like the price of the customer! A reasonable alternative is to assign each customer the average amount they have paid per item. So let's settle the question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` that calculates the revenue (Quantity * UnitPrice) from each sale. We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FaceGrid
g = sns.FacetGrid(plottable, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer.x-axis needs values. y-axis isn't that easy to read; show in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FaceGrid
g = sns.FacetGrid(customers, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#So we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
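#A compact summary of the same contrast (sketch; assumes the InvoiceNo column that this
#dataset ships with, and uses 'nunique' to count distinct invoices per customer).
(online_rt[online_rt.CustomerID.isin([12346.0, 12347.0])]
 .groupby('CustomerID')
 .agg({'InvoiceNo': 'nunique', 'Quantity': 'sum'}))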
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries." The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
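#Sketch of a slightly more explicit alternative: index[1:4] relies on the UK being ranked
#first, whereas dropping it by name states the intent directly.
sales_volume.drop('United Kingdom').index[:3]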
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that. The "by CustomerID" part is easy: it means we're going to plot one dot per CustomerID, in other words we're going to group by CustomerID. "Quantity per UnitPrice" is trickier. Here's what we know: one axis will represent the Quantity assigned to a given customer, which is easy; we can just plot the total Quantity for each customer. The other axis will represent a UnitPrice assigned to a given customer. Remember that a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides, it's not quite clear what we mean when we say "unit price per customer"; it sounds like the price of the customer! A reasonable alternative is to assign each customer the average amount they have paid per item. So let's settle the question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` that calculates the revenue (Quantity * UnitPrice) from each sale. We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FaceGrid
g = sns.FacetGrid(plottable, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer.x-axis needs values. y-axis isn't that easy to read; show in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FaceGrid
g = sns.FacetGrid(customers, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#So we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries." The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that. The "by CustomerID" part is easy: it means we're going to plot one dot per CustomerID, in other words we're going to group by CustomerID. "Quantity per UnitPrice" is trickier. Here's what we know: one axis will represent the Quantity assigned to a given customer, which is easy; we can just plot the total Quantity for each customer. The other axis will represent a UnitPrice assigned to a given customer. Remember that a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides, it's not quite clear what we mean when we say "unit price per customer"; it sounds like the price of the customer! A reasonable alternative is to assign each customer the average amount they have paid per item. So let's settle the question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` that calculates the revenue (Quantity * UnitPrice) from each sale. We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
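#Illustrative sketch: AvgPrice is a quantity-weighted average (total revenue / total quantity).
#A plain mean of UnitPrice would weight every order line equally, which is not what we want here.
#naive_mean is a name used only for this comparison.
naive_mean = grouped.UnitPrice.mean().rename('NaiveMeanPrice')
pd.concat([plottable.AvgPrice, naive_mean], axis=1).head()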
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FaceGrid
g = sns.FacetGrid(plottable, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
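#Optional sketch (not part of the exercise): log scales on both axes show the whole skewed
#cloud at once instead of zooming into one corner; points with an AvgPrice of 0 are simply
#dropped from a log axis.
plt.figure()
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.xscale('log')
plt.yscale('log')
plt.show()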
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
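#Sketch of what pd.cut does with these buckets (toy prices, illustrative only): each value
#lands in a half-open interval such as (0, 1] or (2, 3].
#Note that np.arange(0, 50, 1) stops at 49, so the buckets cover (0, 49] rather than the
#full [0, 50) described above; using price_end + price_interval as the arange stop would
#include the last interval as well.
pd.cut([0.5, 2.5, 48.9], buckets)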
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer.x-axis needs values. y-axis isn't that easy to read; show in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
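#Optional sketch: the same y-axis formatting can be generated with matplotlib's FuncFormatter
#instead of hard-coding the tick labels (the lambda below is illustrative).
from matplotlib.ticker import FuncFormatter
ax = revenue_per_price.plot()
ax.set_xlabel('Unit Price (in buckets of ' + str(price_interval) + ')')
ax.set_ylabel('Revenue')
ax.yaxis.set_major_formatter(FuncFormatter(lambda y, _: '${:.1f}M'.format(y / 1e6)))
plt.show()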
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FaceGrid
g = sns.FacetGrid(customers, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#So we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries." The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that. The "by CustomerID" part is easy: it means we're going to plot one dot per CustomerID, in other words we're going to group by CustomerID. "Quantity per UnitPrice" is trickier. Here's what we know: one axis will represent the Quantity assigned to a given customer, which is easy; we can just plot the total Quantity for each customer. The other axis will represent a UnitPrice assigned to a given customer. Remember that a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides, it's not quite clear what we mean when we say "unit price per customer"; it sounds like the price of the customer! A reasonable alternative is to assign each customer the average amount they have paid per item. So let's settle the question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` that calculates the revenue (Quantity * UnitPrice) from each sale. We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FaceGrid
g = sns.FacetGrid(plottable, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# map over a make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer.x-axis needs values. y-axis isn't that easy to read; show in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FaceGrid
g = sns.FacetGrid(customers, col="Country")
# map over a make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#So we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries". The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that."by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID."Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.We will use this later to figure out an average price per customer.
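As a sanity check on that definition: "total Revenue divided by total Quantity" is exactly the quantity-weighted mean of `UnitPrice`, which is why the `Revenue` column built below is all we need. A minimal sketch with made-up numbers (not taken from the dataset):

```python
import numpy as np

# One hypothetical customer: 10 items at $2 and 30 items at $4.
quantities = np.array([10, 30])
unit_prices = np.array([2.0, 4.0])

avg_price = (quantities * unit_prices).sum() / quantities.sum()   # 140 / 40 = 3.5
weighted_mean = np.average(unit_prices, weights=quantities)       # also 3.5
```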
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped['Quantity','Revenue'].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:3: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FacetGrid
g = sns.FacetGrid(plottable, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped['Quantity','Revenue'].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the rate of drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
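Before applying it to the full dataset in the next cell, here is a small sketch of how `pd.cut` assigns prices to $1-wide buckets (hypothetical prices, just to show the interval labels):

```python
import numpy as np
import pandas as pd

prices = pd.Series([0.5, 1.2, 2.9, 3.5, 49.0])   # made-up unit prices
buckets = np.arange(0, 50, 1)                     # bin edges 0, 1, ..., 49
binned = pd.cut(prices, buckets)                  # 0.5 -> (0, 1], 1.2 -> (1, 2], ...
prices.groupby(binned).size()                     # how many prices fall in each band
```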
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer. The x-axis needs values, and the y-axis isn't that easy to read; show it in terms of millions.
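The next cell does this by hand-picking tick positions and labels. An alternative is a tick formatter; a sketch using matplotlib's `FuncFormatter` (not part of the original solution; it reuses `revenue_per_price` and `price_interval` from the cells above):

```python
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

ax = revenue_per_price.plot()
ax.set_xlabel('Unit Price (in buckets of ' + str(price_interval) + ')')
ax.set_ylabel('Revenue')
ax.yaxis.set_major_formatter(FuncFormatter(lambda y, _: f'${y / 1e6:.1f}M'))
plt.show()
```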
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seabor graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rt
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(url)
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FacetGrid
g = sns.FacetGrid(customers, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rt. Note: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FacetGrid
g = sns.FacetGrid(customers, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#so we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt` for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries". The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
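The positional slice `[1:4]` in the next cell assumes the UK is always the single largest seller. A slightly more defensive sketch drops it by name instead (assuming the label is spelled 'United Kingdom' in this dataset):

```python
sales_volume = online_rt.groupby('Country').Quantity.sum()
top3_by_name = sales_volume.drop('United Kingdom', errors='ignore').nlargest(3).index
```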
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that."by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID."Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FacetGrid
g = sns.FacetGrid(plottable, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
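Another common trick for data piled up against the axes is to put both axes on a logarithmic scale; a quick sketch (it reuses `online_rt` with the `Revenue` column added above):

```python
import matplotlib.pyplot as plt

per_customer = online_rt.groupby('CustomerID')[['Quantity', 'Revenue']].sum()
avg_price = per_customer.Revenue / per_customer.Quantity

plt.scatter(per_customer.Quantity, avg_price, alpha=0.5)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Total Quantity (log scale)')
plt.ylabel('Average unit price (log scale)')
plt.show()
```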
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the rate of drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer. The x-axis needs values, and the y-axis isn't that easy to read; show it in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rt. Note: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
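If you do not know the file's encoding up front, one defensive pattern is to fall back to latin1 only when utf-8 fails; a sketch (not the original solution):

```python
import pandas as pd

path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
try:
    online_rt = pd.read_csv(path)
except UnicodeDecodeError:
    online_rt = pd.read_csv(path, encoding='latin1')
```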
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
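Before dropping those rows, it can be worth checking how much of the data the filter removes; a quick sanity check (not part of the original exercise):

```python
n_dropped = (online_rt.Quantity <= 0).sum()
print(f'{n_dropped} rows with non-positive Quantity '
      f'({n_dropped / len(online_rt):.2%} of all rows)')
```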
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
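For reference, newer seaborn versions (0.9+) can build the same kind of faceted scatter in a single call with `relplot`; a sketch that rebuilds the per-customer sums it needs:

```python
import seaborn as sns

top_countries = ['Netherlands', 'EIRE', 'Germany']
per_customer = (online_rt[online_rt.Country.isin(top_countries)]
                .groupby(['CustomerID', 'Country'], as_index=False)
                [['Quantity', 'UnitPrice']].sum())
sns.relplot(data=per_customer, x='Quantity', y='UnitPrice',
            col='Country', kind='scatter')
```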
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
customers
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FacetGrid
g = sns.FacetGrid(customers, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#so we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt` for `CustomerID`s 12346.0 and 12347.0.
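One way to see the "one giant order vs. many small orders" contrast in a single table is to count distinct invoices per customer; a sketch (assuming the standard `InvoiceNo` column is present in this copy of the dataset):

```python
ids = [12346.0, 12347.0]
summary = (online_rt[online_rt.CustomerID.isin(ids)]
           .groupby('CustomerID')
           .agg(orders=('InvoiceNo', 'nunique'),
                total_quantity=('Quantity', 'sum'),
                mean_unit_price=('UnitPrice', 'mean')))
summary
```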
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries". The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that."by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID."Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped['Quantity','Revenue'].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
C:\Users\yehan\Anaconda3\envs\ml\lib\site-packages\ipykernel_launcher.py:3: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FacetGrid
g = sns.FacetGrid(plottable, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now?We aren't much better-off than what we started with. The data are still extremely scattered around and don't seem quite informative.But we shouldn't despair!There are two things to realize:1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5). So that might suggest a trend.2) We have more data! We've only been looking at the data from 3 different countries and they are plotted on different graphs.So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped['Quantity','Revenue'].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
C:\Users\yehan\Anaconda3\envs\ml\lib\site-packages\ipykernel_launcher.py:2: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the rate of drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#buckets
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer. The x-axis needs values, and the y-axis isn't that easy to read; show it in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
BONUS: Create your own question and answer it. Which is the most popular product? (the product with the highest order quantity)
###Code
best_seller_code = online_rt.groupby('StockCode').sum().sort_values('Quantity', ascending=False).head(1).index.values[0]
best_seller_code
best_seller_name = online_rt[online_rt.StockCode == best_seller_code].Description
best_seller_name
###Output
_____no_output_____
###Markdown
What is the best seller in each country?
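A more compact route to the same per-country answer, sketched with `idxmax` (equivalent in spirit to the `nlargest(1)` approach in the next cell):

```python
qty = online_rt.groupby(['Country', 'StockCode']).Quantity.sum().reset_index()
best_rows = qty.loc[qty.groupby('Country').Quantity.idxmax()]
best_rows.head()
```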
###Code
group_rt = online_rt.groupby(['Country','StockCode']).agg({'Quantity':sum})
sort_group_rt = group_rt['Quantity'].groupby(level=0, group_keys=False)
best_seller_per_country = sort_group_rt.nlargest(1)
#print(best_seller_per_country)
best_seller_per_country_df = pd.DataFrame({
'Country': best_seller_per_country.index.get_level_values(0),
'StockCode': best_seller_per_country.index.get_level_values(1),
})
best_seller_per_country_df = pd.merge(best_seller_per_country_df,online_rt[['StockCode','Description']], on='StockCode',how='inner').drop_duplicates()
best_seller_per_country_df = best_seller_per_country_df.sort_values('Country').reset_index(drop=True)
display(best_seller_per_country_df)
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rt. Note: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FacetGrid
g = sns.FacetGrid(customers, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#so we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt` for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries". The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that."by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID."Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FacetGrid
g = sns.FacetGrid(plottable, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the rate of drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer. The x-axis needs values, and the y-axis isn't that easy to read; show it in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rt. Note: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# creates the FacetGrid
g = sns.FacetGrid(customers, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# adds legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative.This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).(But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#so we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
###Markdown
Step 7.1.3 Pull data from `online_rt` for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird at section 6.
#At this stage we need to go back to the initial problem we've specified at section 6.
#And make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries". The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that."by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID."Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FacetGrid
g = sns.FacetGrid(plottable, col="Country")
# map over it to make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries, and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
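###Markdown
(Extra sketch, not in the original exercise.) Because the cloud hugs both axes, log-scaled axes can spread the points out and make the shape of the Quantity-AvgPrice relationship easier to judge; customers with an AvgPrice of 0 simply drop off a log axis.
###Code
# Hedged sketch: replot the same data on log-log axes; points with AvgPrice == 0
# (free items) are silently omitted by the log scale.
plt.scatter(plottable.Quantity, plottable.AvgPrice, alpha=0.5)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Quantity (log scale)')
plt.ylabel('AvgPrice (log scale)')
plt.show()
###Output
_____no_output_____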
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# make a scatterplot
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that wasn't the case! Nevertheless, the rate of drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much. But it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
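###Markdown
(Extra sketch.) Note that `np.arange(0, 50, 1)` produces edges 0 through 49 and `pd.cut` defaults to right-closed intervals, so a price of exactly 0 and prices in (49, 50) fall outside the buckets above. If the intent really is [0, 50), the variant below (an assumption about the intended bucketing) includes the end point and uses left-closed intervals.
###Code
# Hedged sketch: true [0, 50) buckets in steps of 1 - include the 50 edge and
# make the intervals left-closed so that a UnitPrice of exactly 0 is counted too.
buckets_inclusive = np.arange(price_start, price_end + price_interval, price_interval)
revenue_per_price_v2 = online_rt.groupby(
    pd.cut(online_rt.UnitPrice, buckets_inclusive, right=False)).Revenue.sum()
revenue_per_price_v2.head()
###Output
_____no_output_____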
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer. The x-axis needs values, and the y-axis isn't that easy to read; show it in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
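###Markdown
(Extra sketch.) Instead of hard-coding the tick positions and labels, matplotlib's `FuncFormatter` can render whatever the y-axis happens to contain in millions; this is an alternative presentation, not part of the original solution.
###Code
# Hedged sketch: format the y axis in millions with a FuncFormatter rather than
# hard-coding tick positions and labels.
from matplotlib.ticker import FuncFormatter
ax = revenue_per_price.plot()
ax.set_xlabel('Unit Price (in buckets of ' + str(price_interval) + ')')
ax.set_ylabel('Revenue')
ax.yaxis.set_major_formatter(FuncFormatter(lambda y, _: '${:.1f}M'.format(y / 1e6)))
plt.show()
###Output
_____no_output_____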
###Markdown
Online Retails Purchase Introduction: Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# set the graphs to show in the jupyter notebook
%matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv). Step 3. Assign it to a variable called online_rtNote: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
###Code
path = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Online_Retail/Online_Retail.csv'
online_rt = pd.read_csv(path, encoding = 'latin1')
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
###Code
# group by the Country
countries = online_rt.groupby('Country').sum()
# sort the value and get the first 10 after UK
countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11]
# create the plot
countries['Quantity'].plot(kind='bar')
# Set the title and labels
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders')
# show the plot
plt.show()
###Output
_____no_output_____
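###Markdown
(Extra sketch.) The cell above sums every column and relies on the UK sitting in position 0 after sorting; summing only `Quantity` and dropping the UK by name (assuming the label is spelled 'United Kingdom' in this file) is a little more robust and sidesteps numeric-only issues on newer pandas versions.
###Code
# Hedged sketch: top 10 countries by total Quantity, excluding the UK by name.
quantity_by_country = online_rt.groupby('Country')['Quantity'].sum()
top10_ex_uk = (quantity_by_country
               .drop('United Kingdom', errors='ignore')  # assumed label spelling
               .sort_values(ascending=False)
               .head(10))
top10_ex_uk.plot(kind='bar')
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with most orders (excluding UK)')
plt.show()
###Output
_____no_output_____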
###Markdown
Step 5. Exclude negative Quantity entries
###Code
online_rt = online_rt[online_rt.Quantity > 0]
online_rt.head()
###Output
_____no_output_____
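###Markdown
(Extra sketch.) A quick check that the filter did what we wanted never hurts.
###Code
# Hedged sketch: verify that no non-positive quantities survived the filter.
print('Non-positive Quantity rows remaining:', (online_rt.Quantity <= 0).sum())
print('Minimum Quantity:', online_rt.Quantity.min())
###Output
_____no_output_____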
###Markdown
Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
###Code
# groupby CustomerID
customers = online_rt.groupby(['CustomerID','Country']).sum()
# there is an outlier with negative price
customers = customers[customers.UnitPrice > 0]
# get the value of the index and put in the column Country
customers['Country'] = customers.index.get_level_values(1)
# top three countries
top_countries = ['Netherlands', 'EIRE', 'Germany']
# filter the dataframe to just select ones in the top_countries
customers = customers[customers['Country'].isin(top_countries)]
#################
# Graph Section #
#################
# create the FacetGrid
g = sns.FacetGrid(customers, col="Country")
# map a scatterplot over the grid
g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
# add the legend
g.add_legend()
###Output
_____no_output_____
###Markdown
Step 7. Investigate why the previous results look so uninformative. This section might seem a bit tedious to go through, but I've thought of it as some kind of simulation of problems one might encounter when dealing with data and other people. Besides, there is a prize at the end (i.e. Section 8). (But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.) Step 7.1 Look at the first line of code in Step 6 and try to figure out if it leads to any kind of problem. Step 7.1.1 Display the first few rows of that DataFrame.
###Code
#This takes our initial dataframe and groups it primarily by 'CustomerID' and secondarily by 'Country'.
#It sums all the (non-indexical) columns that have numerical values under each group.
customers = online_rt.groupby(['CustomerID','Country']).sum().head()
#Here's what it looks like:
customers
###Output
_____no_output_____
###Markdown
Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
###Code
customers.UnitPrice.dtype
#So it's 'float64'
#But why did we sum 'UnitPrice', to begin with?
#If 'UnitPrice' wasn't something that we were interested in then it would be OK
#since we wouldn't care whether UnitPrice was being summed or not.
#But we want our graphs to reflect 'UnitPrice'!
#Note that summing up 'UnitPrice' can be highly misleading.
#It doesn't tell us much as to what the customer is doing.
#Suppose, a customer places one order of 1000 items that are worth $1 each.
#Another customer places a thousand orders of 1 item worth $1.
#There isn't much of a difference between what the former and the latter customers did.
#After all, they've spent the same amount of money.
#So we should be careful when we're summing columns. Sometimes we intend to sum just one column
#('Quantity' in this case) and another column like UnitPrice gets into the mix.
###Output
_____no_output_____
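###Markdown
(Extra sketch with made-up numbers.) The thought experiment in the comments above can be made concrete with a tiny toy frame: summing `UnitPrice` makes the two customers look wildly different, while revenue divided by quantity does not.
###Code
# Hedged toy example (fabricated illustrative numbers, not from the dataset):
# customer A buys 1000 items at $1 in one order; customer B buys 1 item at $1
# in each of 1000 orders.
toy = pd.DataFrame({
    'CustomerID': ['A'] + ['B'] * 1000,
    'Quantity':   [1000] + [1] * 1000,
    'UnitPrice':  [1.0] + [1.0] * 1000,
})
toy['Revenue'] = toy.Quantity * toy.UnitPrice
summary = toy.groupby('CustomerID').agg({'Quantity': 'sum', 'UnitPrice': 'sum', 'Revenue': 'sum'})
summary['AvgPrice'] = summary.Revenue / summary.Quantity
# Summed UnitPrice: A -> 1.0, B -> 1000.0 (misleading); AvgPrice: 1.0 for both.
summary
###Output
_____no_output_____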
###Markdown
Step 7.1.3 Pull data from `online_rt` for `CustomerID`s 12346.0 and 12347.0.
###Code
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
#The result is exactly what we'd suspected. Customer 12346.0 placed
#one giant order, whereas 12347.0 placed a lot of smaller orders.
#So we've identified one potential reason why our plots looked so weird in Section 6.
#At this stage we need to go back to the initial problem we specified in Section 6
#and make it more precise.
###Output
_____no_output_____
###Markdown
Step 7.2 Reinterpreting the initial problem. To reiterate the question that we were dealing with: "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries". The question is open to a set of different interpretations, so we need to disambiguate. We could do a single plot by looking at all the data from the top 3 countries, or we could do one plot per country. To keep things consistent with the rest of the exercise, let's stick to the latter option. So that's settled. But "top 3 countries" with respect to what? Two answers suggest themselves: total sales volume (i.e. total quantity sold) or total sales (i.e. revenue). This exercise goes for sales volume, so let's stick to that. Step 7.2.1 Find out the top 3 countries in terms of sales volume.
###Code
sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4] #We are excluding UK
top3
###Output
_____no_output_____
###Markdown
Step 7.2.2 Now that we have the top 3 countries, we can focus on the rest of the problem: "Quantity per UnitPrice by CustomerID". We need to unpack that. The "by CustomerID" part is easy: it means we're going to be plotting one dot per CustomerID. In other words, we're going to be grouping by CustomerID. "Quantity per UnitPrice" is trickier. Here's what we know: *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer. *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides, it's not quite clear what we mean when we say "unit price per customer"; it sounds like the price of the customer! A reasonable alternative is to assign each customer the average amount they have paid per item. So let's settle that question in that manner. Step 7.3 Modify, select and plot data Step 7.3.1 Add a column to online_rt called `Revenue` that calculates the revenue (Quantity * UnitPrice) of each sale. We will use this later to figure out an average price per customer.
###Code
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
###Output
_____no_output_____
###Markdown
Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
###Code
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum') # double brackets select the two columns
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# get the value of the index and put in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
###Output
_____no_output_____
###Markdown
Step 7.3.3 Plot
###Code
####################
# Graph Section v 2#
####################
# creates the FacetGrid
g = sns.FacetGrid(plottable, col="Country")
# map a scatterplot onto each facet of the grid
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
# adds legend
g.add_legend();
###Output
_____no_output_____
###Markdown
Step 7.4 What to do now? We aren't much better off than when we started. The data are still extremely scattered around and don't seem very informative. But we shouldn't despair! There are two things to realize: 1) The data seem to be skewed towards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5), so that might suggest a trend. 2) We have more data! We've only been looking at the data from 3 different countries and they are plotted on different graphs. So: we should plot the data regardless of `Country` and hopefully see a less scattered graph. Step 7.4.1 Plot the data for each `CustomerID` on a single graph
###Code
grouped = online_rt.groupby(['CustomerID'])
plottable = grouped[['Quantity','Revenue']].agg('sum') # double brackets select the two columns
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# draw a scatterplot of total Quantity vs. average price
plt.scatter(plottable.Quantity, plottable.AvgPrice)
plt.plot()
#Turns out the graph is still extremely skewed towards the axes like an exponential decay function.
###Output
_____no_output_____
###Markdown
Step 7.4.2 Zoom in so we can see that curve more clearly
###Code
grouped = online_rt.groupby(['CustomerID','Country'])
plottable = grouped.agg({'Quantity': 'sum',
'Revenue': 'sum'})
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
# draw a scatterplot of total Quantity vs. average price
plt.scatter(plottable.Quantity, plottable.AvgPrice)
#Zooming in. (I'm starting the axes from a negative value so that
#the dots can be plotted in the graph completely.)
plt.xlim(-40,2000)
plt.ylim(-1,80)
plt.plot()
#And there is still that pattern, this time in close-up!
###Output
_____no_output_____
###Markdown
8. Plot a line chart showing revenue (y) per UnitPrice (x). Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreases. But that's hardly surprising; it would be surprising if that weren't the case! Nevertheless, the rate of drop in quantity is so drastic that it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change much, but it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and what that relation looks like. That is what we are going to do now. 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
###Code
#These are the values for the graph.
#They are used both in selecting data from
#the DataFrame and plotting the data so I've assigned
#them to variables to increase consistency and make things easier
#when playing with the variables.
price_start = 0
price_end = 50
price_interval = 1
#Creating the buckets to collect the data accordingly
buckets = np.arange(price_start,price_end,price_interval)
#Select the data and sum
revenue_per_price = online_rt.groupby(pd.cut(online_rt.UnitPrice, buckets)).Revenue.sum()
revenue_per_price.head()
###Output
_____no_output_____
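###Markdown
A quick aside (a minimal sketch, not part of the original exercise): `pd.cut` builds right-closed intervals by default, so with integer edges 0, 1, ..., 49 a price of exactly 0 falls outside every bucket and prices above 49 are dropped as well. A tiny example of that behaviour:
###Code
import pandas as pd
# A handful of made-up prices to illustrate how pd.cut assigns buckets
prices = pd.Series([0.0, 0.5, 1.0, 1.5, 48.9, 49.5])
edges = list(range(0, 50)) # same style of integer edges as above
# Intervals are (0, 1], (1, 2], ...; 0.0 and 49.5 end up as NaN
print(pd.cut(prices, edges))
###Output
_____no_output_____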
###Markdown
8.3 Plot.
###Code
revenue_per_price.plot()
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.show()
###Output
_____no_output_____
###Markdown
8.4 Make it look nicer. The x-axis needs values, and the y-axis isn't that easy to read; show it in terms of millions.
###Code
revenue_per_price.plot()
#Place labels
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
#Even though the data is bucketed in intervals of 1,
#I'll plot ticks a little bit further apart from each other to avoid cluttering.
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
plt.show()
#Looks like a major chunk of our revenue comes from items worth $0-$3!
###Output
_____no_output_____
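###Markdown
As an alternative (just a sketch of another way to get the same "in millions" effect, reusing the `revenue_per_price` series and `price_interval` defined above rather than hard-coding the tick labels), matplotlib's `FuncFormatter` can format the y ticks for us:
###Code
from matplotlib.ticker import FuncFormatter
ax = revenue_per_price.plot()
ax.set_xlabel('Unit Price (in buckets of '+str(price_interval)+')')
ax.set_ylabel('Revenue')
# Format y ticks as millions of dollars, e.g. 2500000 -> $2.5M
ax.yaxis.set_major_formatter(FuncFormatter(lambda y, _: '${:g}M'.format(y / 1e6)))
plt.show()
###Output
_____no_output_____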
002_Python_NumPy_Array.ipynb | ###Markdown
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/09_Python_NumPy_Module)** Python NumPy Array: A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension. A numpy array is a powerful N-dimensional array object which is laid out in rows and columns. We can initialize NumPy arrays from nested Python lists and access their elements. NumPy Array Types: Load in NumPy Library (remember to pip install numpy first)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create a NumPy ArraySimplest way to create an array in Numpy is to use Python List
###Code
myPythonList = [2,3,4,5]
myPythonList
###Output
_____no_output_____
###Markdown
To convert python list to a numpy array by using the object **`np.array`**.
###Code
numpy_array_from_list = np.array(myPythonList)
numpy_array_from_list
###Output
_____no_output_____
###Markdown
In practice, there is no need to declare a Python List. The operation can be combined.
###Code
myPythonList1 = np.array([2,3,4,5])
myPythonList1
###Output
_____no_output_____
###Markdown
>**NOTE:** Numpy documentation states use of **`np.ndarray`** to create an array. However, this is not the recommended method; use **`np.array()`** as shown above. You can also create a numpy array from a tuple (a short example follows the next code cell). Array basics We can initialize numpy arrays from nested Python lists, and access elements using square brackets **`[]`**:
###Code
a = np.array([1,2,3]) # Create a 1D array
print(a)
print(type(a)) # Prints "<class 'numpy.ndarray'>"
b = np.array([[9.0,8.0,7.0],[6.0,5.0,4.0]])
print(b)
# Get Dimension
a.ndim
# Get Shape
b.shape
# Get Size
a.itemsize
# Get Size
b.itemsize
# Get total size
a.nbytes # a.nbytes = a.size * a.itemsize
# Get number of elements
a.size
###Output
_____no_output_____
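###Markdown
As the note above mentions, an array can also be built from a tuple (or a tuple of tuples); here is a minimal sketch:
###Code
import numpy as np
# 1D array from a tuple
t = np.array((7, 8, 9))
print(t, t.shape)
# 2D array from a tuple of tuples
m = np.array(((1, 2), (3, 4)))
print(m, m.shape)
###Output
_____no_output_____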
###Markdown
Summary:
###Code
a = np.array([1, 2, 3]) # Create a 1d array
print(a)
print(type(a)) # Prints "<class 'numpy.ndarray'>"
print(a.shape) # Prints "(3,)"
print(a[0], a[1], a[2]) # Indexing with 3 elements. Prints "1 2 3"
a[0] = 5 # Change an element of the array
print(a) # Prints "[5, 2, 3]"
b = np.array([[1,2,3],[4,5,6]]) # Create a 2d array
print(b)
print(b.shape) # Prints "(2, 3)"
print(b[0, 0], b[0, 1], b[1, 0]) # Prints "1 2 4"
###Output
[1 2 3]
<class 'numpy.ndarray'>
(3,)
1 2 3
[5 2 3]
[[1 2 3]
[4 5 6]]
(2, 3)
1 2 4
###Markdown
Numpy also provides many functions to create arrays:
###Code
import numpy as np
a = np.zeros((2,2)) # numpy.zeros() or np.zeros Python function is used to create a matrix full of zeroes.
print(a) # Prints "[[ 0. 0.]
# [ 0. 0.]]"
b = np.ones((1,2)) # np.ones() function is used to create a matrix full of ones.
print(b) # Prints "[[ 1. 1.]]"
c = np.full((2,2), 7) # Create a constant array
print(c) # Prints "[[ 7. 7.]
# [ 7. 7.]]"
d = np.eye(2) # Create a 2x2 identity matrix
print(d) # Prints "[[ 1. 0.]
# [ 0. 1.]]"
e = np.random.random((2,2)) # Create an array filled with random values
print(e) # Might print "[[ 0.91940167 0.08143941]
# [ 0.68744134 0.87236687]]"
###Output
[[0. 0.]
[0. 0.]]
[[1. 1.]]
[[7 7]
[7 7]]
[[1. 0.]
[0. 1.]]
[[0.73562437 0.24487723]
[0.33931275 0.74965267]]
###Markdown
You can read about other methods of array creation in this **[documentation](https://numpy.org/doc/stable/user/basics.creation.html#arrays-creation)**. Array indexing Numpy offers several ways to index into arrays and to access/change specific elements, rows, columns, etc.**Slicing:** Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
###Code
# 2D array
a = np.array([[1,2,3,4,5,6,7],[8,9,10,11,12,13,14]])
print(a)
# Get a specific element [row, column]
a[1, 5] # to select element '13' we need row 2 and element 6. Hence r=1, c=5 (index start from 0)
# or a[1,-2]
# Get a specific row
a[0, :] # all columns
# Get a specific column
a[:, 2] # all rows
# Getting a little more fancy [startindex:endindex:stepsize]
a[0, 1:-1:2]
a[1,5] = 20
print(a)
a[:,2] = [1,2]
print(a)
# 3D example
b = np.array([[[1,2],[3,4]],[[5,6],[7,8]]])
print(b)
# Get specific element (work outside in)
b[0,1,1]
# replace
b[:,1,:]
print(b)
b[:,1,:] = [[9,9],[8,8]] # the replacement must match the (2,2) shape of b[:,1,:]
print(b)
###Output
[[[1 2]
[3 4]]
[[5 6]
[7 8]]]
###Markdown
Summary:
###Code
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(a)
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print(b)
# A slice of an array is a view into the same data, so modifying it
# will modify the original array.
print(a[0, 1]) # Prints "2"
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print(a[0, 1]) # Prints "77"
###Output
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]]
[[2 3]
[6 7]]
2
77
###Markdown
You can also mix **integer indexing** with **slice indexing**. However, doing so will yield an array of lower rank than the original array. >**Note:** this is quite different from the way that MATLAB handles array slicing:
###Code
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(a)
# Two ways of accessing the data in the middle row of the array.
# Mixing integer indexing with slices yields an array of lower rank,
# while using only slices yields an array of the same rank as the
# original array:
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
print(row_r1, row_r1.shape) # Prints "[5 6 7 8] (4,)"
print(row_r2, row_r2.shape) # Prints "[[5 6 7 8]] (1, 4)"
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print(col_r1, col_r1.shape) # Prints "[ 2 6 10] (3,)"
print(col_r2, col_r2.shape) # Prints "[[ 2]
# [ 6]
# [10]] (3, 1)"
###Output
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]]
[5 6 7 8] (4,)
[[5 6 7 8]] (1, 4)
[ 2 6 10] (3,)
[[ 2]
[ 6]
[10]] (3, 1)
###Markdown
Integer array indexingWhen you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:
###Code
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
print(a)
# An example of integer array indexing.
# The returned array will have shape (3,) and
print(a[[0, 1, 2], [0, 1, 0]]) # Prints "[1 4 5]"
# The above example of integer array indexing is equivalent to this:
print(np.array([a[0, 0], a[1, 1], a[2, 0]])) # Prints "[1 4 5]"
# When using integer array indexing, you can reuse the same
# element from the source array:
print(a[[0, 0], [1, 1]]) # Prints "[2 2]"
# Equivalent to the previous integer array indexing example
print(np.array([a[0, 1], a[0, 1]])) # Prints "[2 2]"
###Output
[[1 2]
[3 4]
[5 6]]
[1 4 5]
[1 4 5]
[2 2]
[2 2]
###Markdown
One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:
###Code
import numpy as np
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print(a) # prints "array([[ 1, 2, 3],
# [ 4, 5, 6],
# [ 7, 8, 9],
# [10, 11, 12]])"
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print(a[np.arange(4), b]) # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print(a) # prints "array([[11, 2, 3],
# [ 4, 5, 16],
# [17, 8, 9],
# [10, 21, 12]])
###Output
[[ 1 2 3]
[ 4 5 6]
[ 7 8 9]
[10 11 12]]
[ 1 6 7 11]
[[11 2 3]
[ 4 5 16]
[17 8 9]
[10 21 12]]
###Markdown
Quiz time
###Code
# Generate matrix:
### 1 2 3 4 5
### 6 7 8 9 10
### 11 12 13 14 15
### 16 17 18 19 20
### 21 22 23 24 25
### 26 27 28 29 30
# Access
# 11 12
# 16 17
# Access
# 2
# 8
# 14
# 20
# Access
# 4 5
# 24 25
# 29 30
# (one possible solution is sketched in the next cell)
###Output
_____no_output_____
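###Markdown
One possible way to tackle the quiz above (a sketch, not the only solution):
###Code
import numpy as np
# Build the 6x5 matrix containing 1..30
quiz = np.arange(1, 31).reshape(6, 5)
print(quiz)
# 11 12 / 16 17 -> rows 2-3, columns 0-1
print(quiz[2:4, 0:2])
# 2, 8, 14, 20 -> one element per row, shifting one column right each time
print(quiz[[0, 1, 2, 3], [1, 2, 3, 4]])
# 4 5 / 24 25 / 29 30 -> rows 0, 4, 5 and the last two columns
print(quiz[[0, 4, 5], 3:])
###Output
_____no_output_____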
###Markdown
Boolean array indexingBoolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:
###Code
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
print(a)
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print(bool_idx) # Prints "[[False False]
# [ True True]
# [ True True]]"
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print(a[bool_idx]) # Prints "[3 4 5 6]"
# We can do all of the above in a single concise statement:
print(a[a > 2]) # Prints "[3 4 5 6]"
###Output
[[1 2]
[3 4]
[5 6]]
[[False False]
[ True True]
[ True True]]
[3 4 5 6]
[3 4 5 6]
###Markdown
For brevity we have left out a lot of details about numpy array indexing; if you want to know more about Array Indexing you should read this **[documentation](https://numpy.org/doc/stable/reference/arrays.indexing.html)**. Array datatypesEvery numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:
###Code
a = np.array([1,2,3], dtype='int32') # Create a 1D array with int32 type
print(a)
# Get Type
a.dtype
b = np.array([[9.0,8.0,7.0],[6.0,5.0,4.0]])
print(b)
b.dtype
###Output
[[9. 8. 7.]
[6. 5. 4.]]
###Markdown
You can read all about numpy datatypes in this **[documentation](https://numpy.org/doc/stable/reference/arrays.dtypes.html)**. Summary
###Code
import numpy as np
x = np.array([1, 2]) # Let numpy choose the datatype
print(x.dtype) # Prints "int64"
x = np.array([1.0, 2.0]) # Let numpy choose the datatype
print(x.dtype) # Prints "float64"
x = np.array([1, 2], dtype=np.int64) # Force a particular datatype
print(x.dtype) # Prints "int64"
###Output
int32
float64
int64
###Markdown
Numpy also provides many functions to create arrays:
###Code
# All 0s matrix
np.zeros((2,3))
# All 1s matrix
np.ones((4,2,2), dtype='int32')
# Any other number
np.full((2,2), 99)
# Any other number (full_like)
np.full_like(a, 4)
#or np.full(a.shape, 4)
# Random decimal numbers
np.random.rand(4,2)
#or
#np.random.random_sample(a.shape)
# Random Integer values
np.random.randint(-4,8, size=(3,3))
# The identity matrix
np.identity(5)
# Repeat an array
arr = np.array([[1,2,3]])
r1 = np.repeat(arr,3, axis=0)
print(r1)
###Output
[[1 2 3]
[1 2 3]
[1 2 3]]
###Markdown
Summary:
###Code
import numpy as np
a = np.zeros((2,2)) # Create an array of all zeros
print(a) # Prints "[[ 0. 0.]
# [ 0. 0.]]"
b = np.ones((1,2)) # Create an array of all ones
print(b) # Prints "[[ 1. 1.]]"
c = np.full((2,2), 7) # Create a constant array
print(c) # Prints "[[ 7. 7.]
# [ 7. 7.]]"
d = np.eye(2) # Create a 2x2 identity matrix
print(d) # Prints "[[ 1. 0.]
# [ 0. 1.]]"
e = np.random.random((2,2)) # Create an array filled with random values
print(e) # Might print "[[ 0.91940167 0.08143941]
# [ 0.68744134 0.87236687]]"
###Output
[[0. 0.]
[0. 0.]]
[[1. 1.]]
[[7 7]
[7 7]]
[[1. 0.]
[0. 1.]]
[[0.67635086 0.51041159]
[0.15725797 0.1589645 ]]
###Markdown
You can read about other methods of array creation in this **[documentation](https://numpy.org/doc/stable/user/basics.creation.html#arrays-creation)**.
###Code
#Generate matrix
# 1 1 1 1 1
# 1 0 0 0 1
# 1 0 9 0 1
# 1 1 1 1 1
output = np.ones((5,5))
print(output)
z = np.zeros((3,3))
z[1,1] = 9
print(z)
output[1:-1,1:-1] = z
print(output)
###Output
[[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]]
[[0. 0. 0.]
[0. 9. 0.]
[0. 0. 0.]]
[[1. 1. 1. 1. 1.]
[1. 0. 0. 0. 1.]
[1. 0. 9. 0. 1.]
[1. 0. 0. 0. 1.]
[1. 1. 1. 1. 1.]]
###Markdown
Be careful when copying arrays!!!
###Code
a = np.array([1,2,3])
a
b = a
#b = a.copy()
b[0] = 100
print(a)
###Output
[100 2 3]
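###Markdown
A short sketch of the safe version hinted at by the commented-out line above: `copy()` gives `b` its own data, so `a` is left alone.
###Code
a = np.array([1,2,3])
b = a.copy() # b now owns its own data
b[0] = 100
print(a) # a is unchanged
print(b) # only the copy changed
###Output
_____no_output_____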
###Markdown
Array mathBasic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
###Code
a = np.array([1,2,3,4])
print(a)
a + 2
a - 2
a * 2
a / 2
b = np.array([1,0,1,0])
a + b
a ** 2
# Take the cosine of each element
np.cos(a)
###Output
_____no_output_____
###Markdown
You can find the full list of mathematical functions provided by numpy in this **[documentation](https://docs.scipy.org/doc/numpy/reference/routines.math.html)**.
###Code
import numpy as np
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
print(x)
print(y)
# Elementwise sum; both produce the array
# [[ 6.0 8.0]
# [10.0 12.0]]
print(x + y)
print(np.add(x, y))
# Elementwise difference; both produce the array
# [[-4.0 -4.0]
# [-4.0 -4.0]]
print(x - y)
print(np.subtract(x, y))
# Elementwise product; both produce the array
# [[ 5.0 12.0]
# [21.0 32.0]]
print(x * y)
print(np.multiply(x, y))
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print(x / y)
print(np.divide(x, y))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print(np.sqrt(x))
###Output
[[1. 2.]
[3. 4.]]
[[5. 6.]
[7. 8.]]
[[ 6. 8.]
[10. 12.]]
[[ 6. 8.]
[10. 12.]]
[[-4. -4.]
[-4. -4.]]
[[-4. -4.]
[-4. -4.]]
[[ 5. 12.]
[21. 32.]]
[[ 5. 12.]
[21. 32.]]
[[0.2 0.33333333]
[0.42857143 0.5 ]]
[[0.2 0.33333333]
[0.42857143 0.5 ]]
[[1. 1.41421356]
[1.73205081 2. ]]
###Markdown
>**Note:** that unlike MATLAB, **`*`** is elementwise multiplication, not matrix multiplication. We instead use the **`dot`** function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. **`dot`** is available both as a function in the numpy module and as an instance method of array objects:
###Code
### Dot product: product of two arrays
f = np.array([1,2])
g = np.array([4,5])
### 1*4+2*5
np.dot(f, g)
import numpy as np
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print(v.dot(w))
print(np.dot(v, w))
# Matrix / vector product; both produce the rank 1 array [29 67]
print(x.dot(v))
print(np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print(x.dot(y))
print(np.dot(x, y))
###Output
219
219
[29 67]
[29 67]
[[19 22]
[43 50]]
[[19 22]
[43 50]]
###Markdown
Numpy provides many useful functions for performing computations on arrays; one of the most useful is **`sum`**:
###Code
import numpy as np
x = np.array([[1,2],[3,4]])
print(np.sum(x)) # Compute sum of all elements; prints "10"
print(np.sum(x, axis=0)) # Compute sum of each column; prints "[4 6]"
print(np.sum(x, axis=1)) # Compute sum of each row; prints "[3 7]"
###Output
10
[4 6]
[3 7]
###Markdown
You can find the full list of mathematical functions provided by numpy in this **[documentation](https://numpy.org/doc/stable/reference/routines.math.html)**. Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the **`T`** attribute of an array object:
###Code
import numpy as np
x = np.array([[1,2], [3,4]])
print(x) # Prints "[[1 2]
# [3 4]]"
print(x.T) # Prints "[[1 3]
# [2 4]]"
# Note that taking the transpose of a rank 1 array does nothing:
v = np.array([1,2,3])
print(v) # Prints "[1 2 3]"
print(v.T) # Prints "[1 2 3]"
###Output
[[1 2]
[3 4]]
[[1 3]
[2 4]]
[1 2 3]
[1 2 3]
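###Markdown
If you actually need a column vector out of a rank 1 array (since `.T` does nothing there, as shown above), one way is to add an axis; a small sketch:
###Code
v = np.array([1,2,3])
print(v[:, np.newaxis]) # shape (3, 1): a proper column vector
print(v.reshape(-1, 1).shape) # same idea via reshape: (3, 1)
###Output
_____no_output_____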
###Markdown
Numpy provides many more functions for manipulating arrays; you can see the full list in the **[documentation](https://numpy.org/doc/stable/reference/routines.array-manipulation.html)**. Matrix MultiplicationThe Numpy **`matmul()`** function is used to return the matrix product of 2 arrays.
###Code
a = np.ones((2,3))
print(a)
b = np.full((3,2), 2)
print(b)
np.matmul(a,b) # matmul() function is used to return the matrix product of 2 arrays.
### Matmul: matrix product of two arrays
h = [[1,2],[3,4]]
i = [[5,6],[7,8]]
### 1*5+2*7 = 19
np.matmul(h, i)
###Output
_____no_output_____
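###Markdown
Since Python 3.5 the `@` operator is equivalent to `np.matmul` for ndarrays, which can read a little more naturally; a quick sketch:
###Code
import numpy as np
a = np.ones((2,3))
b = np.full((3,2), 2)
print(np.matmul(a, b))
print(a @ b) # '@' calls matmul for ndarrays, so this gives the same result
###Output
_____no_output_____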
###Markdown
DeterminantLast but not least, if you need to compute the determinant, you can use **`np.linalg.det()`**. Note that numpy takes care of the dimension.
###Code
# Find the determinant
c = np.identity(3)
np.linalg.det(c)
## Determinant 2*2 matrix
5*8-7*6
np.linalg.det(i)
## Reference docs (https://docs.scipy.org/doc/numpy/reference/routines.linalg.html)
# Determinant
# Trace
# Singular Vector Decomposition
# Eigenvalues
# Matrix Norm
# Inverse
# Etc...
###Output
_____no_output_____
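###Markdown
A few of the routines listed in the comment above, shown on a small invertible matrix (just a sketch; see the linked reference for the full list):
###Code
import numpy as np
m = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(np.trace(m)) # sum of the diagonal
print(np.linalg.inv(m)) # matrix inverse
print(np.linalg.norm(m)) # Frobenius norm by default
eigenvalues, eigenvectors = np.linalg.eig(m)
print(eigenvalues)
###Output
_____no_output_____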
###Markdown
StatisticsNumPy has quite a few useful statistical functions for finding minimum, maximum, percentile standard deviation and variance, etc from the given elements in the array. The functions are explained as follows − Numpy is equipped with the robust statistical function as listed below:| Function | Numpy ||:----: |:---- || **`Min`** | **np.min()** | | **`Max`** | **np.max()** | | **`Mean`** | **np.mean()** | | **`Median`** | **np.median()** | | **`Standard deviation`** | **np.std()** |
###Code
# Consider the following Array
import numpy as np
normal_array = np.random.normal(5, 0.5, 10)
print(normal_array)
# Example:Statistical function
### Min
print(np.min(normal_array))
### Max
print(np.max(normal_array))
### Mean
print(np.mean(normal_array))
### Median
print(np.median(normal_array))
### Sd
print(np.std(normal_array))
stats = np.array([[1,2,3],[4,5,6]])
stats
np.min(stats)
np.max(stats, axis=1)
np.sum(stats, axis=0)
###Output
_____no_output_____
###Markdown
BroadcastingBroadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:
###Code
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
# Now y is the following
# [[ 2 2 4]
# [ 5 5 7]
# [ 8 8 10]
# [11 11 13]]
print(y)
###Output
[[ 2 2 4]
[ 5 5 7]
[ 8 8 10]
[11 11 13]]
###Markdown
This works; however when the matrix **`x`** is very large, computing an explicit loop in Python could be slow. Note that adding the vector **`v`** to each row of the matrix **`x`** is equivalent to forming a matrix **`vv`** by stacking multiple copies of **`v`** vertically, then performing elementwise summation of **`x`** and **`vv`**. We could implement this approach like this:
###Code
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print(vv) # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print(y) # Prints "[[ 2 2 4
# [ 5 5 7]
# [ 8 8 10]
# [11 11 13]]"
###Output
[[1 0 1]
[1 0 1]
[1 0 1]
[1 0 1]]
[[ 2 2 4]
[ 5 5 7]
[ 8 8 10]
[11 11 13]]
###Markdown
Numpy broadcasting allows us to perform this computation without actually creating multiple copies of **`v`**. Consider this version, using broadcasting:
###Code
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print(y) # Prints "[[ 2 2 4]
# [ 5 5 7]
# [ 8 8 10]
# [11 11 13]]"
###Output
[[ 2 2 4]
[ 5 5 7]
[ 8 8 10]
[11 11 13]]
###Markdown
**Explanation:** The line **`y = x + v`** works even though **`x`** has shape **(4, 3)** and **`v`** has shape **(3,)** due to broadcasting; this line works as if **`v`** actually had shape **(4, 3)**, where each row was a copy of **`v`**, and the sum was performed elementwise. Broadcasting two arrays together follows these rules:* If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.* The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.* The arrays can be broadcast together if they are compatible in all dimensions.* After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.* In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension. If this explanation does not make sense, try reading the explanation from this **[documentation](https://numpy.org/doc/stable/user/basics.broadcasting.html)** or this **[explanation](http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc)**. Functions that support broadcasting are known as universal functions. You can find the list of all universal functions in this **[documentation](https://numpy.org/doc/stable/reference/ufuncs.html#available-ufuncs)**. Here are some applications of broadcasting (a quick check of the compatibility rules follows after them):
###Code
import numpy as np
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
# [[ 4 5]
# [ 8 10]
# [12 15]]
print(np.reshape(v, (3, 1)) * w)
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
# [[2 4 6]
# [5 7 9]]
print(x + v)
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
# [[ 5 6 7]
# [ 9 10 11]]
print((x.T + w).T)
# Another solution is to reshape w to be a column vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print(x + np.reshape(w, (2, 1)))
# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
# [[ 2 4 6]
# [ 8 10 12]]
print(x * 2)
###Output
[[ 4 5]
[ 8 10]
[12 15]]
[[2 4 6]
[5 7 9]]
[[ 5 6 7]
[ 9 10 11]]
[[ 5 6 7]
[ 9 10 11]]
[[ 2 4 6]
[ 8 10 12]]
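###Markdown
A quick check of the compatibility rules listed above (a sketch): trailing dimensions must either match or be 1, otherwise NumPy raises an error. `np.broadcast` reports the resulting shape without computing anything.
###Code
import numpy as np
print(np.broadcast(np.ones((4, 3)), np.ones(3)).shape) # compatible -> (4, 3)
print(np.broadcast(np.ones((4, 1)), np.ones((1, 3))).shape) # compatible -> (4, 3)
try:
    np.ones((4, 3)) + np.ones(2) # trailing dimensions 3 vs 2: incompatible
except ValueError as err:
    print('incompatible:', err)
###Output
_____no_output_____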
###Markdown
Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible. Arrays reorganizing `asarray()` The **`asarray()`** function is used when you want to convert an input to an array. The input could be a list, tuple, ndarray, etc.**Syntax:**```pythonnumpy.asarray(data, dtype=None, order=None)[source]```* **`data`**: Data that you want to convert to an array* **`dtype`**: This is an optional argument. If not specified, the data type is inferred from the input data* **`Order`**: Default is **`C`**, which is row-major (C-style) order. The other option is **`F`** (Fortran-style, column-major)
###Code
# Consider the following 2-D matrix with four rows and four columns filled by 1
import numpy as np
a = np.matrix(np.ones((4,4)))
###Output
_____no_output_____
###Markdown
If you try to change a value of the matrix this way, it will not work: **`np.array(a)`** returns a copy, and only the copy is modified.
###Code
np.array(a)[2]=3
print(a) # value won't change in result
###Output
[[1. 1. 1. 1.]
[1. 1. 1. 1.]
[1. 1. 1. 1.]
[1. 1. 1. 1.]]
###Markdown
The original matrix is unchanged because **`np.array()`** copies the data. You can use **`asarray`** if you want the modification to reach the original array. Let's see if any change occurs when we set the third row to the value 2.
###Code
np.asarray(a)[2]=2 # np.asarray(A): converts the matrix A to an array
print(a)
###Output
[[1. 1. 1. 1.]
[1. 1. 1. 1.]
[2. 2. 2. 2.]
[1. 1. 1. 1.]]
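###Markdown
To make the copy-versus-view behaviour explicit (a small sketch): `np.array` copies an existing array by default, while `np.asarray` hands back the same underlying data, which is why the assignment above reaches the original matrix.
###Code
import numpy as np
original = np.ones((2, 2))
copied = np.array(original) # new, independent data
same_data = np.asarray(original) # same underlying data
print(np.shares_memory(original, copied)) # False
print(np.shares_memory(original, same_data)) # True
same_data[0, 0] = 42
print(original) # the change is visible in the original
###Output
_____no_output_____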
###Markdown
`arange()`The **`arange()`** is an inbuilt numpy function that returns an ndarray object containing evenly spaced values within a defined interval. For instance, you want to create values from 1 to 10; you can use **`arange()`** function.**Syntax:**```pythonnumpy.arange(start, stop,step) ```* **`start`**: Start of interval* **`stop`**: End of interval* **`step`**: Spacing between values. Default step is 1
###Code
# Example 1:
import numpy as np
np.arange(1, 11)
###Output
_____no_output_____
###Markdown
If you want to change the step, you can add a third number in the parenthesis. It will change the step.
###Code
# Example 2:
import numpy as np
np.arange(1, 14, 4)
###Output
_____no_output_____
###Markdown
Reshape Data On some occasions, you need to reshape the data from wide to long. You can use the reshape function for this. **Syntax:** ```pythonnumpy.reshape(a, newShape, order='C')```* **`a: Array`** that you want to reshape* **`newShape`**: The new desired shape* **`order`**: Default is **`C`**, which is row-major (C-style) order.
###Code
import numpy as np
e = np.array([(1,2,3), (4,5,6)])
print(e)
e.reshape(3,2)
before = np.array([[1,2,3,4],[5,6,7,8]])
print(before)
after = before.reshape((4,2)) # the total number of elements (8) must stay the same
print(after)
###Output
[[1 2 3 4]
[5 6 7 8]]
###Markdown
Flatten Data When you deal with some neural networks such as convnets, you need to flatten the array. You can use **`flatten()`**.**Syntax:** ```pythonnumpy.ndarray.flatten(order='C')```* **`order`**: Default is **`C`** (row-major); pass **`'F'`** to flatten in column-major (Fortran) order.
###Code
e.flatten()
###Output
_____no_output_____
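###Markdown
A short sketch of what the `order` argument changes: `'C'` walks the array row by row, `'F'` walks it column by column.
###Code
import numpy as np
e = np.array([(1,2,3), (4,5,6)])
print(e.flatten()) # row-major (C order): 1 2 3 4 5 6
print(e.flatten(order='F')) # column-major (Fortran order): 1 4 2 5 3 6
###Output
_____no_output_____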
###Markdown
What is hstack? With hstack you can append data horizontally. This is a very convenient function in Numpy. Let's study it with an example:
###Code
## Horitzontal Stack
import numpy as np
f = np.array([1,2,3])
g = np.array([4,5,6])
print('Horizontal Append:', np.hstack((f, g)))
# Horizontal stack
h1 = np.ones((2,4))
h2 = np.zeros((2,2))
np.hstack((h1,h2))
###Output
_____no_output_____
###Markdown
What is vstack?With vstack you can append data vertically. Let's study it with an example:
###Code
## Vertical Stack
import numpy as np
f = np.array([1,2,3])
g = np.array([4,5,6])
print('Vertical Append:', np.vstack((f, g)))
# Vertically stacking vectors
v1 = np.array([1,2,3,4])
v2 = np.array([5,6,7,8])
np.vstack([v1,v2,v1,v2])
###Output
_____no_output_____
###Markdown
Generate Random NumbersTo generate random numbers from a Gaussian (normal) distribution use:**Syntax:**```pythonnumpy.random.normal(loc, scale, size)```* **`loc`**: the mean, i.e. the center of the distribution* **`scale`**: the standard deviation* **`size`**: the number of samples to return
###Code
## Generate random nmber from normal distribution
normal_array = np.random.normal(5, 0.5, 10)
print(normal_array)
###Output
[5.72953035 5.8753296 4.09489662 5.67868944 5.04104088 3.95532062
5.41815566 4.89365465 5.25280107 4.94067196]
###Markdown
LinspaceLinspace gives evenly spaced samples.**Syntax:**```pythonnumpy.linspace(start, stop, num, endpoint)```* **`start`**: Start of sequence* **`stop`**: End of sequence* **`num`**: Number of samples to generate. Default is 50* **`endpoint`**: If **`True`** (default), stop is the last value. If **`False`**, stop value is not included.
###Code
# Example: For instance, it can be used to create 10 values from 1 to 5 evenly spaced.
import numpy as np
np.linspace(1.0, 5.0, num=10)
###Output
_____no_output_____
###Markdown
If you do not want to include the last digit in the interval, you can set endpoint to **`False`**
###Code
np.linspace(1.0, 5.0, num=5, endpoint=False)
###Output
_____no_output_____
###Markdown
LogSpaceLogSpace returns even spaced numbers on a log scale. Logspace has the same parameters as **`np.linspace`**.**Syntax:**```pythonnumpy.logspace(start, stop, num, endpoint)```* **`start`**: Start of sequence* **`stop`**: End of sequence* **`num`**: Number of samples to generate. Default is 50* **`endpoint`**: If **`True`** (default), stop is the last value. If **`False`**, stop value is not included.
###Code
# Example:
np.logspace(3.0, 4.0, num=4)
###Output
_____no_output_____
###Markdown
Finally, if you want to check the memory size of a single element in an array, you can use **`.itemsize`**
###Code
x = np.array([1,2,3], dtype=np.complex128)
x.itemsize
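# related: the total memory of the array is itemsize * size, also exposed as nbytes
print(x.itemsize * x.size)   # 48 bytes for three complex128 values
print(x.nbytes)              # also 48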
###Output
_____no_output_____
###Markdown
Miscellaneous Load Data from File. You can download "data.txt" from **[here](https://github.com/milaan9/09_Python_NumPy_Module/blob/main/data.txt)**
###Code
filedata = np.genfromtxt('data.txt', delimiter=',')
filedata = filedata.astype('int32') # you can also change type to 'int64'
print(filedata)
###Output
[[ 1 13 21 11 196 75 4 3 34 6 7 8 0 1 2 3 4 5]
[ 3 42 12 33 766 75 4 55 6 4 3 4 5 6 7 0 11 12]
[ 1 22 33 11 999 11 2 1 78 0 1 2 9 8 7 1 76 88]]
###Markdown
Boolean Masking and Advanced Indexing
###Code
filedata >50
print(filedata)
filedata[filedata >50] # '[]' will display the value of data point from the dataset
print(filedata)
np.any(filedata > 50, axis = 0) # axis=0 refers to columns and axis=1 refers to rows in this dataset
print(filedata)
np.all(filedata > 50, axis = 0) # '.all' refers to all the data points in row/column (based on axis=0 or axis=1).
print(filedata)
(((filedata > 50) & (filedata < 100)))
print(filedata)
(~((filedata > 50) & (filedata < 100))) # '~' means not
### You can index with a list in NumPy
a = np.array([1,2,3,4,5,6,7,8,9])
a[[1,2,8]]  # fancy indexing with a list of index positions
###Output
_____no_output_____ |
problem_set_1_Bayesian_Decision_Theory.ipynb | ###Markdown
**Source** https://www.youtube.com/watch?v=azXCzI57Yfc https://keras.io/guides/training_with_built_in_methods/ https://www.youtube.com/watch?v=5gLarqG8p4s https://colah.github.io/posts/2014-10-Visualizing-MNIST/ https://www.youtube.com/watch?v=u5VCZBUNOcA https://www.youtube.com/watch?v=4R7mA_AJxK8 https://stackoverflow.com/questions/3584805/in-matplotlib-what-does-the-argument-mean-in-fig-add-subplot111 https://www.youtube.com/watch?v=r-vYJqcFxBI **Introduction**The MNIST database (Modified National Institute of Standards and Technology database[1]) is a large database of handwritten digits that is commonly used for training various image processing systems. The MNIST database contains 60,000 training images and 10,000 testing images. Importing mnist from keras and other libraries that will help to accomplish the discriminant analysis. Problem_set_1 Task-1 Importing the dataset from keras and using mnist to get the training and test images
###Code
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
import math
###Output
_____no_output_____
###Markdown
Loading the dataset and splitting it into training and test sets: there are 60,000 training images of 28x28 pixels each, and `y` holds the corresponding digit labels 0 to 9. The images are handled as 28 x 28 numpy arrays.
###Code
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print('Train', len(x_train), len(y_train))
print('Test', (x_test.shape, y_test.shape))
###Output
Train 60000 60000
Test ((10000, 28, 28), (10000,))
###Markdown
Grouping the training images by label: initialize ten lists (one per digit) and append each training image to the list that matches its label, effectively combining labels with images.
###Code
c1,c2,c3,c4,c5,c6,c7,c8,c9,c0 = [],[],[],[],[],[],[],[],[],[]
concatenate_list = [c0,c1,c2,c3,c4,c5,c6,c7,c8,c9]
#a list that will hold the value of seperated 60000 matrices to the respective categories matched with respective images.
#the list concatenate_list will hold 60000 values in 10 columns.
for i in range (60000):
label = y_train[i]
concatenate_list[label].append(np.array(x_train[i]))
#print(len(concatenate_list))
###Output
_____no_output_____
###Markdown
Testing `np.sum` to confirm it gives the same result as a sum computed with a for loop.
###Code
concatenate_list[0][0].shape
test = [1,2,3,4]
sum_t2 = 0
sum_test = np.sum(test,0,dtype=np.float32)
for i in range(0,len(test)):
sum_t2 = sum_t2 + test[i]
print("for loop sum", sum_t2)
print("numpy sum", sum_test)
print(len(test))
###Output
for loop sum 10
numpy sum 10.0
4
###Markdown
Computing the elementwise mean and standard deviation to obtain the mean image and the standard-deviation image for each digit class.
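That is, for each pixel position the functions below compute $$ \mu = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i-\mu\right)^{2}}, $$ applied elementwise over the $28\times28$ images belonging to one digit class.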
###Code
def elementwise_mean(e_m):
sum = np.sum(e_m,0,dtype=np.float32)
mean = sum/len(e_m)
return mean
def elementwise_sd(e_m,mean):
sum_d = 0
for i in range (0,len(e_m)):
sum_d = sum_d + ((e_m[i]-mean)**2)
standard_deviation = sum_d/len(e_m)
standard_deviation = np.sqrt(standard_deviation)
return standard_deviation
###Output
_____no_output_____
###Markdown
Plotting the image matrix.
###Code
%matplotlib inline
mean_list = []
standard_deviation_list = []
for i in range(0,10):
mean = elementwise_mean(concatenate_list[i])
standard_deviation = elementwise_sd(concatenate_list[i],mean)
mean_list.append(mean)
standard_deviation_list.append(standard_deviation)
for j in range(0,10):
fig = plt.figure()
a = fig.add_subplot(1, 2, 1) #forming 1x2 grid 1st subplot
imgplot = plt.imshow(mean_list[j],cmap='gray') #plotting mean
a.set_title('Mean')
a = fig.add_subplot(1, 2, 2) #filling 2nd subplot in the grid with standard deviation
imgplot = plt.imshow(standard_deviation_list[j],cmap='gray')
a.set_title('Standard Deviation')
###Output
_____no_output_____
###Markdown
Problem_set_1, Task_2 Classify the images in the test dataset using the 0-1 loss function and the Bayesian decision rule, and report the performance.
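Under Gaussian class-conditional densities and 0-1 loss, the Bayes decision rule amounts to assigning each image to the class with the largest discriminant score; the function below implements a variant of $$ g_j(x) = -\tfrac{1}{2}(x-\mu_j)^{\top}\Sigma_j^{-1}(x-\mu_j) - \tfrac{d}{2}\ln(2\pi) - \tfrac{1}{2}\ln\lvert\Sigma_j\rvert + \ln P(\omega_j), $$ where $\mu_j$ and $\Sigma_j$ are the per-class mean image and (regularised) covariance and $P(\omega_j)$ is the class prior.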
###Code
import operator

def model(x_test, y_test, mean_list):
    # class priors estimated from the label counts in y_test
    categories_prob = {i: y_test.tolist().count(i) for i in range(10)}

    n = x_test.shape[1]   # image side length (28)
    d = 2                 # dimension constant from the original formulation; identical for all classes
    # pre-compute the regularised covariance inverse and log-determinant for each class
    covs = [np.cov(mean_list[j]) + np.identity(n) * 0.1 for j in range(10)]
    inv_covs = [np.linalg.inv(c) for c in covs]
    log_dets = [np.linalg.slogdet(c)[1] for c in covs]

    pred = []
    for k in range(len(x_test)):          # one prediction per test image
        g = {}
        for j in range(10):               # Gaussian discriminant score for each class
            img_mean = x_test[k] - mean_list[j]
            cov_mean_inv = np.dot(inv_covs[j], img_mean)
            quad = float(np.dot(np.reshape(img_mean, (n * n, 1)).T,
                                np.reshape(cov_mean_inv, (n * n, 1))))
            g[j] = (-0.5 * quad
                    - (d / 2) * np.log(2 * math.pi)
                    - 0.5 * log_dets[j]
                    + np.log(categories_prob[j]))
        pred.append(max(g.items(), key=operator.itemgetter(1))[0])
    return pred

test_pred = model(x_test, y_test, mean_list)

error = 0
real = 0
for k in range(y_test.shape[0]):
    error = error + (y_test[k] - test_pred[k])
    if y_test[k] == test_pred[k]:
        real = real + 1

accuracy = real / len(test_pred)
Error = error / len(test_pred)
print('Accuracy - ', accuracy*100, '%')
print('Error - ', Error*100, '%')
###Output
Accuracy - 87.3 %
Error - 45.300000000000004 %
|
Homework notebooks/(HW notebooks) netology Machine learning/21. Syntactic analysis and keyword selection/HW1_banki_TM-and-classification1.ipynb | ###Markdown
NLP Homework 1 [100 points] Sentiment classification In this homework you will classify bank reviews from banki.ru by sentiment. [Link to the data](https://drive.google.com/open?id=1CPKtX5HcgGWRpzbWZ2fMCyqgHGgk21l2). The data contains the review texts themselves, some additional information, and a rating on a scale from 1 to 5. The texts are stored in json files in the responses array. Let's look at an example review:
###Code
responses[99]
###Output
_____no_output_____
###Markdown
Part 1. Text analysis [40/100] 1. Count the number of reviews per city and per bank 2. Plot histograms of lengths in characters and in words (optional) 3. Find the 10 most frequent: * words * words without stop words * lemmas * nouns 4. Plot the Zipf curve 5. Answer the following questions: * which word occurs more often, "сотрудник" (employee) or "клиент" (client)? * how many times do the words "мошенничество" (fraud) and "доверие" (trust) occur? 6. The "rating_grade" field holds the review rating on a scale from 1 to 5. Use the $tf-idf$ measure (see the formula sketch below) to find keywords and bigrams for positive reviews (rating 5) and negative reviews (rating 1)
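As a reminder, one common form of the measure used in task 6 is $$ \text{tf-idf}(t, d) = \text{tf}(t, d)\cdot\log\frac{N}{\text{df}(t)}, $$ where $\text{tf}(t,d)$ is the frequency of term $t$ in document $d$, $N$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$ (sklearn's `TfidfVectorizer` uses a smoothed variant of the idf term).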
###Code
data = pd.DataFrame(responses)
data.head(3)
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 153499 entries, 0 to 153498
Data columns (total 10 columns):
author 153479 non-null object
bank_license 153498 non-null object
bank_name 153499 non-null object
city 138325 non-null object
datetime 153499 non-null object
num_comments 153499 non-null int64
rating_grade 88658 non-null float64
rating_not_checked 153499 non-null bool
text 153499 non-null object
title 153499 non-null object
dtypes: bool(1), float64(1), int64(1), object(7)
memory usage: 10.7+ MB
###Markdown
1. Count the number of reviews per city and per bank
###Code
# Database of cities in Russia and beyond
# https://habr.com/ru/post/21949/
# http://download.geonames.org/export/dump/
# https://github.com/Legostaev/contry_region_city/
ct = pd.read_csv('/Users/aleksandr/Desktop/rocid.csv (copy 3)/city.csv', sep=';', encoding='cp1251')
ct.head()
X_df1, Y_df2 = data.city, ct.name
speech_recognition = X_df1.to_list()
claim_list = Y_df2.to_list()
import Levenshtein
def n_sort(x=claim_list, y=speech_recognition):
l = Levenshtein.distance
c = []
for i in y:
b = sorted({r: l(i, r) for r in x}.items(), key = lambda x: x[1])[0]
c.append(
['Ввели: "{}" - Скорее всего имели ввиду: "{}" - Колличество min подборов: "{}"'.format(i, b[0], b[1])]
)
return c
n_sort(claim_list, speech_recognition[0:30])
###Output
_____no_output_____
###Markdown
**Levenshtein handled the already-clean entries well, but let us down on the harder ones...**\_'Ввели: "г. Фролово (Волгоградская обл.)" - Скорее всего имели ввиду: "Сургут (Самарская обл.)" - Колличество min подборов: "17"'_Next steps: - I will try the Natasha library. It is also clear that the parenthesised parts spoil the matching, so it may be worth removing them if Natasha performs worse than Levenshtein.\As a last resort I will trim the dataset, since sacrificing the outliers is better than keeping them. P.S. The dataset is rather unpleasant; typos of this kind should be fixed at the data-collection stage...
###Code
# from natasha import *
from natasha import LocationExtractor
def extract_city(text):
if isinstance(text, str):
extractor = LocationExtractor()
matches = extractor(text)
if len(matches) > 0:
return matches[0].fact.name
else:
return None
else:
return None
cities = pd.DataFrame(data.city.unique(), columns=['orig_name'])
cities['clean_name'] = cities['orig_name'].apply(extract_city)
cities.head()
on = cities.orig_name.value_counts().sum()
cn = cities.clean_name.value_counts().sum()
print('\n{0}'.format(int(on-cn)))
###Output
1050
###Markdown
**Better already: losing 1050 outliers is not too bad.** Adding the cleaned column to the main data set
###Code
data['clean_city'] = data['city'].replace(cities['orig_name'].tolist(), cities['clean_name'].str.title().tolist())
data.head(3)
###Output
_____no_output_____
###Markdown
Looking at the data more closely, the 'city' column contains None values, and consequently so does 'clean_city'. To clean the dataset of these anomalies we filter on this column.
###Code
df_base = data.copy() # copy of the original data
df_isna = data[pd.isna(data.clean_city)] # rows where the city is NaN
# city data without gaps; for this work I take the cleanest data even at the cost of fewer rows
df_notna = data[pd.notna(data.clean_city)]
# df_base.city.value_counts(dropna=False)
df_notna.info()
banks = df_notna.groupby(['bank_name']).count()['text'].sort_values(ascending=False).head(10)
rcParams['figure.figsize'] = 8, 6
plt.barh(banks.index[::-1],banks.values[::-1])
plt.xlabel('Количество отзывов по Банкам')
plt.ylabel('Top 10')
plt.show()
cities = df_notna.groupby(['clean_city']).count()['text'].sort_values(ascending=False).head(10)
rcParams['figure.figsize'] = 8, 6
plt.barh(cities.index[::-1],cities.values[::-1])
plt.xlabel('Количество отзывов по Городам')
plt.ylabel('Top 10')
plt.show()
###Output
_____no_output_____
###Markdown
Note that Moscow and Saint Petersburg should be treated as anomalies in the data; I would analyse the capital separately from all other cities. The dataset also contains a time series component, which adds multiple seasonality... we skip that in this work, but I would take only the last 2-3 years of data. Possible features: year, month, day of week, time. 2. Plot histograms of lengths in characters and in words (optional)
###Code
len_c = df_notna.text.apply(len)
rcParams['figure.figsize'] = 8, 6
len_c[len_c<10000].plot(kind='hist',bins=50)
plt.xlabel('Длины отзывов в символах')
plt.ylabel('')
plt.show()
len_t = df_notna.text.str.split().apply(len)
rcParams['figure.figsize'] = 8, 6
len_t[len_t<2000].plot(kind='hist',bins=50)
plt.xlabel('Длины отзывов в словах')
plt.ylabel('')
plt.show()
###Output
_____no_output_____
###Markdown
3. Find the 10 most frequent: - words - words without stop words - lemmas - nouns Words
###Code
regex = re.compile("[А-Яа-я]+")
def words_only(text, regex=regex):
try:
return " ".join(regex.findall(text))
except:
return ""
df = df_notna.copy()
df['text_tokinized'] = df.text.str.lower().apply(words_only)
from tqdm import tqdm_notebook as tqdm
from collections import Counter
cnt = Counter()
n_types = []
n_tokens = []
tokens = []
for index, row in tqdm(df.iterrows(), total = len(df)):
tokens = row['text_tokinized'].split()
cnt.update(tokens)
n_types.append(len(cnt))
n_tokens.append(sum(list(cnt.values())))
for i in cnt.most_common(10):
print(i)
###Output
_____no_output_____
###Markdown
Words without stop words
###Code
from nltk.corpus import stopwords
# import nltk
# nltk.download('stopwords')
mystopwords = stopwords.words('russian') + ['это', 'наш' , 'тыс', 'млн', 'млрд', 'также', 'т', 'д', 'г']
def remove_stopwords(text, mystopwords=mystopwords):
try:
return " ".join([token for token in text.split() if not token in mystopwords])
except:
return ""
df['text_tokinized_stop_worlds'] = df.text_tokinized.str.lower().apply(remove_stopwords)
df.head(3)
cnt = Counter()
n_types = []
n_tokens = []
tokens = []
tokens_all=[]
for index, row in tqdm(df.iterrows(), total = len(df)):
tokens = row['text_tokinized_stop_worlds'].split()
tokens_all+=tokens
cnt.update(tokens)
n_types.append(len(cnt))
n_tokens.append(sum(cnt.values()))
for i in cnt.most_common(10):
print(i)
###Output
_____no_output_____
###Markdown
('г', 61082) was unexpected, and the computation took 30+ minutes.\I then recomputed with 'г' added to the stop words. Forms like банка, банк, банке show that lemmatisation should improve the data. Lemmas
###Code
from pymorphy2 import MorphAnalyzer
from pymystem3 import Mystem
m = Mystem()
def lemmatize(text, mystem=m):
try:
return "".join(m.lemmatize(text)).strip()
except:
return " "
mystoplemmas = stopwords.words('russian') + ['который','прошлый','сей', 'свой', 'наш', 'мочь', 'г']
def remove_stoplemmas(text, mystoplemmas=mystoplemmas):
try:
return " ".join([token for token in text.split() if not token in mystoplemmas])
except:
return ""
df['lemma'] = df['text_tokinized_stop_worlds'].apply(lemmatize)
df.head(3)
cnt = Counter()
n_types = []
n_tokens = []
tokens = []
tokens_all=[]
for index, row in tqdm(df.iterrows(), total = len(df)):
tokens = row['lemma'].split()
cnt.update(tokens)
n_types.append(len(cnt))
tokens_all+=tokens
n_tokens.append(sum(cnt.values()))
for i in cnt.most_common(10):
print(i)
###Output
_____no_output_____
###Markdown
Nouns
###Code
def to_nouns(text, mystem=m):
m=MorphAnalyzer()
try:
return " ".join([noun for noun in text.split() if m.parse(noun)[0].tag.POS =='NOUN'])
except:
return []
to_nouns(df.lemma.iloc[1])
from multiprocessing import Pool
with Pool() as p:
df['nouns']=p.map(to_nouns,df.lemma)
cnt_noun = Counter()
n_types_noun = []
n_tokens_noun= []
tokens_noun = []
tokens_all_noun=[]
for index, row in tqdm(df.iterrows(), total = len(df)):
tokens = row['nouns'].split()
cnt_noun.update(tokens)
n_types_noun.append(len(cnt))
tokens_all_noun+=tokens
n_tokens_noun.append(sum(cnt.values()))
for i in cnt_noun.most_common(10):
print(i)
###Output
_____no_output_____
###Markdown
4. Plot the Zipf curve
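Recall that Zipf's law says the frequency of the $r$-th most common word falls off roughly as $$ f(r) \approx \frac{C}{r^{s}}, \qquad s \approx 1, $$ so the rank-frequency plot should show a sharply decaying curve (approximately a straight line on log-log axes).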
###Code
freqs = list(cnt.values())
freqs = sorted(freqs, reverse = True)
fig, ax = plt.subplots(figsize=(12,4))
ax.plot(range(300), freqs[:300])  # rank on x, frequency on y, matching the axis labels
plt.xlabel('Номер слова')
plt.ylabel('Частота слова')
plt.title('Кривая Ципфа')
plt.show()
# fig, ax = plt.subplots(figsize=(12,4))
# ax.plot(n_tokens,n_types)
# plt.xlabel('Количество токенов')
# plt.ylabel('Число слов')
# plt.title('Кривая Хипса')
# plt.show()
###Output
_____no_output_____
###Markdown
5. Answer the following questions: - which word occurs more often, "сотрудник" (employee) or "клиент" (client)? - how many times do the words "мошенничество" (fraud) and "доверие" (trust) occur?
###Code
from nltk import FreqDist
Freq_Dist = FreqDist(tokens_all)
print('Слово "сотрудник" встречается -"',Freq_Dist['сотрудник'],'раз')
print('Слово "клиент" встречается -"',Freq_Dist['клиент'],'раз')
###Output
Слово "сотрудник" встречается -" 122619 раз
Слово "клиент" встречается -" 121659 раз
###Markdown
Слов "клиент" > Слов "сотрудник"
###Code
print('Слово "мошенничество" встречается -"',Freq_Dist['мошенничество'],'раз')
print('Слово "доверие" встречается -"',Freq_Dist['доверие'],'раз')
###Output
Слово "мошенничество" встречается -" 3046 раз
Слово "доверие" встречается -" 1884 раз
###Markdown
Слов "мошенничество" > Слов "доверие" 6. В поле "rating_grade" записана оценка отзыва по шкале от 1 до 5. Используйте меру 𝑡𝑓−𝑖𝑑𝑓 , для того, чтобы найти ключевые слова и биграмы для положительных отзывов (с оценкой 5) и отрицательных отзывов (с оценкой 1)
###Code
df['rating_grade'].value_counts()
###Output
_____no_output_____
###Markdown
Let's balance the class samples
###Code
num=10000
df_sample = df[(df.rating_grade==1)].sample(n=num).copy()
df_sample = df_sample.append(df[(df.rating_grade==5)].sample(n=num))
df_sample.rating_grade.value_counts()
tokens_by_topic = []
for rating in df_sample.rating_grade.unique():
tokens=[]
sample=df_sample[df_sample['rating_grade']==rating]
for i in range(len(sample)):
tokens += sample.lemma.iloc[i].split()
tokens_by_topic.append(tokens)
df_sample.head(3)
###Output
_____no_output_____
###Markdown
Unigrams
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
tfidf = TfidfVectorizer(analyzer='word', ngram_range=(1,1), min_df = 0)
tfidf_matrix = tfidf.fit_transform([' '.join(tokens) for tokens in tokens_by_topic])
feature_names = tfidf.get_feature_names()
tfidf_ranking_5 = []
tfidf_ranking_1 = []
dense = tfidf_matrix.todense()
text = dense[1].tolist()[0]
phrase_scores = [pair for pair in zip(range(0, len(text)), text) if pair[1] > 0]
sorted_phrase_scores = sorted(phrase_scores, key=lambda t: t[1] * -1)
phrases = []
for phrase, score in [(feature_names[word_id], score) for (word_id, score) in sorted_phrase_scores][:70]:
tfidf_ranking_5.append(phrase)
text = dense[0].tolist()[0]
phrase_scores = [pair for pair in zip(range(0, len(text)), text) if pair[1] > 0]
sorted_phrase_scores = sorted(phrase_scores, key=lambda t: t[1] * -1)
phrases = []
for phrase, score in [(feature_names[word_id], score) for (word_id, score) in sorted_phrase_scores][:70]:
tfidf_ranking_1.append(phrase)
rank = pd.DataFrame({'tfidf_ranking_5': tfidf_ranking_5,'tfidf_ranking_1': tfidf_ranking_1})
rank.head(10)
###Output
_____no_output_____
###Markdown
Removing the overlap between the two rankings
###Code
rank['tfidf_ranking_5_without_1']=rank.tfidf_ranking_5[~rank.tfidf_ranking_5.isin(rank.tfidf_ranking_1)]
rank['tfidf_ranking_1_without_5']=rank.tfidf_ranking_1[~rank.tfidf_ranking_1.isin(rank.tfidf_ranking_5)]
rank.iloc[:,-2:].dropna()
###Output
_____no_output_____
###Markdown
Bigrams
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
tfidf = TfidfVectorizer(analyzer='word', ngram_range=(2,2), min_df = 0)
tfidf_matrix = tfidf.fit_transform([' '.join(tokens) for tokens in tokens_by_topic])
feature_names = tfidf.get_feature_names()
tfidf_ranking_rank_is_5 = []
tfidf_ranking_rank_is_1 = []
dense = tfidf_matrix.todense()
text = dense[1].tolist()[0]
phrase_scores = [pair for pair in zip(range(0, len(text)), text) if pair[1] > 0]
sorted_phrase_scores = sorted(phrase_scores, key=lambda t: t[1] * -1)
phrases = []
for phrase, score in [(feature_names[word_id], score) for (word_id, score) in sorted_phrase_scores][:70]:
tfidf_ranking_rank_is_5.append(phrase)
text = dense[0].tolist()[0]
phrase_scores = [pair for pair in zip(range(0, len(text)), text) if pair[1] > 0]
sorted_phrase_scores = sorted(phrase_scores, key=lambda t: t[1] * -1)
phrases = []
for phrase, score in [(feature_names[word_id], score) for (word_id, score) in sorted_phrase_scores][:70]:
tfidf_ranking_rank_is_1.append(phrase)
rankings = pd.DataFrame({'tfidf_ranking_rank_is_5': tfidf_ranking_rank_is_5,'tfidf_ranking_rank_is_1': tfidf_ranking_rank_is_1})
rankings.head(10)
rankings['tfidf_ranking_rank_is_5_without_1']=rankings.tfidf_ranking_rank_is_5[~rankings.tfidf_ranking_rank_is_5.isin(rankings.tfidf_ranking_rank_is_1)]
rankings['tfidf_ranking_rank_is_1_without_5']=rankings.tfidf_ranking_rank_is_1[~rankings.tfidf_ranking_rank_is_1.isin(rankings.tfidf_ranking_rank_is_5)]
rankings.iloc[:,-2:].dropna()
###Output
_____no_output_____
###Markdown
Part 2. Topic modelling [20/100] 1. Build several topic models of the document collection with different numbers of topics. Give examples of understandable (interpretable) topics. 2. Find topics that mention specific banks (Sberbank, VTB, another bank). Can you comment on / explain them? This part of the assignment can be done with gensim.
###Code
import gensim.corpora as corpora
from gensim.models import ldamodel
texts = [df['lemma'].iloc[i].split() for i in range(len(df))]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
random.seed(11)
lda = ldamodel.LdaModel(corpus=corpus,
id2word=dictionary,
num_topics=20,
alpha='auto',
eta='auto',
iterations = 20,
passes = 5)
# 20 тем, рандомная выборка 5
lda.show_topics(5)
###Output
_____no_output_____
###Markdown
1. Currency exchange 2. Promotional purchases, something related to card bonus points 3. A request to the bank's technical support 4. Consultations at the bank 5. A long-standing client's problem with the bank
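For task 2 (topics that mention specific banks), a possible sketch using the fitted `lda` model and `dictionary` from above; the lemma forms 'сбербанк' and 'втб' are assumptions about how the bank names look after lemmatisation:
```python
# for each bank name, look up its token id and ask the model which topics
# assign that token a noticeable probability
for bank_word in ['сбербанк', 'втб']:   # assumed lemmatised bank names
    if bank_word in dictionary.token2id:
        word_id = dictionary.token2id[bank_word]
        print(bank_word, '->', lda.get_term_topics(word_id, minimum_probability=0.001))
    else:
        print(bank_word, 'is not in the dictionary')
```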
###Code
lda15 = ldamodel.LdaModel(corpus=corpus,
id2word=dictionary,
num_topics=15,
alpha='auto',
eta='auto',
iterations = 20,
passes = 5)
# 25 тем, рандомная выборка 5
lda15.show_topics(5)
###Output
_____no_output_____
###Markdown
1. Opening a deposit 2. A loan, insurance, something related to an apartment 3. A queue at the bank 4. A phone call at the bank 5. A client's question to a bank employee
###Code
lda10 = ldamodel.LdaModel(corpus=corpus,
id2word=dictionary,
num_topics=10,
alpha='auto',
eta='auto',
iterations = 20,
passes = 5)
# 10 тем
lda10.show_topics(5)
###Output
_____no_output_____
###Markdown
1. An application for a loan agreement 2. Opening a deposit 3. A client's complaint to Sberbank 4. A question to the bank's support 5. A client's phone call to the bank Part 3. Text classification [40/100] For simplicity, we formulate a binary classification task: we will classify into two classes, i.e. distinguish sharply negative reviews (rating 1) from positive reviews (rating 5). 1. Build a training and a test set: select N1 reviews with rating 1 and N2 reviews with rating 5 from the full dataset (the values of N1 and N2 are up to you). Use ```sklearn.model_selection.train_test_split``` to split the selected documents into training and test sets. 2. Use any text classification algorithm you know to solve the task and obtain a baseline. Compare different text vectorisation options: unigrams only, pairs or triples of words, or character $n$-grams. 3. Compare how the quality of the solution changes when latent topics are used as features: * variant 1: a $tf-idf$ transformation (```sklearn.feature_extraction.text.TfidfTransformer```) and singular value decomposition, also known as latent semantic analysis (```sklearn.decomposition.TruncatedSVD```), * variant 2: LDA topic models (```sklearn.decomposition.LatentDirichletAllocation```). Use accuracy and F-measure to evaluate classification quality. An approximate Pipeline for text classification is given below. This part of the assignment can be done with sklearn. Build a training and a test set: select N1 reviews with rating 1 and N2 reviews with rating 5 from the full dataset (the values of N1 and N2 are up to you). Use sklearn.model_selection.train_test_split to split the selected documents into training and test sets.
###Code
# df_sample.to_csv('sample.csv', index=False)
df_sample1 = pd.read_csv('/Users/aleksandr/Downloads/nlp-netology-master/sample.csv')
df_sample1.head(3)
df_sample1.info()
df_sample1.rating_grade.value_counts()
df_sample1.columns
X = df_sample1['lemma'].values
y = df_sample1.rating_grade.values
X.shape, y.shape
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, classification_report, confusion_matrix
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
###Output
_____no_output_____
###Markdown
Use any text classification algorithm you know to solve the task and obtain a baseline. Compare different text vectorisation options: unigrams only, pairs or triples of words, or character n-grams. (An approximate Pipeline for text classification is given below.)
###Code
# from sklearn.pipeline import Pipeline
# from sklearn.ensemble import RandomForestClassifier
# !!! Specify your own parameters at each Pipeline step
# variant 1: tf-idf + LSI
# variant 2: LDA
# clf = Pipeline([
# ('vect', CountVectorizer(analyzer = 'char', ngram_range={4,6})),
# ('clf', RandomForestClassifier()),
# ])
# clf = Pipeline([
# ('vect', CountVectorizer()),
# ('tfidf', TfidfTransformer()),
# ('tm', TruncatedSVD()),
# ('clf', RandomForestClassifier())
# ])
clf_countvectorized = Pipeline(
[('vect', CountVectorizer()),
('clf', LogisticRegression())]
)
params_cntv = {
'vect__analyzer': ['word','char'],
'vect__max_df': (0.5, 0.75, 1.0),
'vect__ngram_range': ((1, 1), (2, 2), (3, 3)),
'clf__C': np.logspace(-3,3,7),
'clf__penalty': ['l1','l2']
}
scores=['accuracy', 'f1']
grid_cntv = GridSearchCV(
clf_countvectorized,
param_grid=params_cntv,
cv=3,
scoring=scores,
refit=scores[0],
n_jobs=-1,
verbose=1
)
grid_cntv.fit(X_train, y_train)
# print(grid_cntv.best_estimator_)
print("Best score: %0.3f" % grid_cntv.best_score_)
predictions=grid_cntv.best_estimator_.predict(X_test)
print("Precision: {0:6.2f}".format(precision_score(y_test, predictions, average='macro')))
print("Recall: {0:6.2f}".format(recall_score(y_test, predictions, average='macro')))
print("F1_score: {0:6.2f}".format(f1_score(y_test, predictions, average='macro')))
print("Accuracy: {0:6.2f}".format(accuracy_score(y_test, predictions)))
print(classification_report(y_test, predictions))
labels = grid_cntv.best_estimator_.classes_
sns.heatmap(
data=confusion_matrix(y_test, predictions),
annot=True,
fmt="d",
cbar=False,
xticklabels=labels,
yticklabels=labels
)
plt.title("Confusion matrix")
plt.show()
###Output
_____no_output_____
###Markdown
Compare how the quality of the solution changes when latent topics are used as features: - variant 1: a tf-idf transformation (sklearn.feature_extraction.text.TfidfTransformer) and singular value decomposition, i.e. latent semantic analysis (sklearn.decomposition.TruncatedSVD), - variant 2: LDA topic models (sklearn.decomposition.LatentDirichletAllocation). Use accuracy and F-measure to evaluate classification quality. (This part of the assignment can be done with sklearn.) Variant 1: tf-idf
###Code
clf_tf_idf = Pipeline(
[('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression())]
)
params_tf_idf={
'vect__analyzer': ['word'],
'vect__max_df': (0.5, 0.75, 1.0),
'vect__ngram_range': [(1, 1), (2, 2), (3, 3)],
'tfidf__use_idf': (True, False),
'clf__C': np.logspace(-3, 3, 7),
'clf__penalty': ['l1', 'l2']
}
scores=['accuracy','f1']
grid_tf_idf = GridSearchCV(
clf_tf_idf,
param_grid=params_tf_idf,
cv=3,
scoring=scores,
refit=scores[0],
n_jobs=-1,
verbose=1
)
grid_tf_idf.fit(X_train, y_train)
# print(grid_tf_idf.best_estimator_)
print("Best score: %0.3f" % grid_tf_idf.best_score_)
predictions=grid_tf_idf.best_estimator_.predict(X_test)
print("Precision: {0:6.2f}".format(precision_score(y_test, predictions, average='macro')))
print("Recall: {0:6.2f}".format(recall_score(y_test, predictions, average='macro')))
print("F1_score: {0:6.2f}".format(f1_score(y_test, predictions, average='macro')))
print("Accuracy: {0:6.2f}".format(accuracy_score(y_test, predictions)))
print(classification_report(y_test, predictions))
labels = grid_tf_idf.best_estimator_.classes_
sns.heatmap(
data=confusion_matrix(y_test, predictions),
annot=True,
fmt="d",
cbar=False,
xticklabels=labels,
yticklabels=labels
)
plt.title("Confusion matrix")
plt.show()
###Output
_____no_output_____
###Markdown
Singular value decomposition (TruncatedSVD)
###Code
clf_tf_idf_TruncatedSVD = Pipeline(
[('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('tsvd', TruncatedSVD()),
('clf', LogisticRegression())]
)
params_tf_idf_TruncatedSVD={
'vect__analyzer': ['word'],
'vect__ngram_range': [(1, 1), (2, 2), (3, 3)],
'tsvd__n_components': [5, 10, 25, 50, 100],
'clf__C': np.logspace(-3, 3, 7),
'clf__penalty': ['l1', 'l2']
}
scores=['accuracy','f1']
grid_tf_idf_TruncatedSVD = GridSearchCV(
clf_tf_idf_TruncatedSVD,
param_grid=params_tf_idf_TruncatedSVD,
cv=3,
scoring=scores,
refit=scores[0],
n_jobs=-1,
verbose=1
)
grid_tf_idf_TruncatedSVD.fit(X_train, y_train)
# print(grid_tf_idf_TruncatedSVD.best_estimator_)
print("Best score: %0.3f" % grid_tf_idf_TruncatedSVD.best_score_)
predictions=grid_tf_idf_TruncatedSVD.best_estimator_.predict(X_test)
print("Precision: {0:6.2f}".format(precision_score(y_test, predictions, average='macro')))
print("Recall: {0:6.2f}".format(recall_score(y_test, predictions, average='macro')))
print("F1_score: {0:6.2f}".format(f1_score(y_test, predictions, average='macro')))
print("Accuracy: {0:6.2f}".format(accuracy_score(y_test, predictions)))
print(classification_report(y_test, predictions))
labels = grid_tf_idf_TruncatedSVD.best_estimator_.classes_
sns.heatmap(
data=confusion_matrix(y_test, predictions),
annot=True,
fmt="d",
cbar=False,
xticklabels=labels,
yticklabels=labels
)
plt.title("Confusion matrix")
plt.show()
###Output
_____no_output_____
###Markdown
Variant 2: LDA
###Code
clf_tf_idf_LDA = Pipeline(
[('vect', CountVectorizer()),
('lda', LatentDirichletAllocation()),
('clf', LogisticRegression())]
)
params_tf_idf_LDA={
'vect__analyzer': ['word'],
'vect__max_df': [0.75],
'vect__ngram_range': [(1, 1)],
'lda__n_components' : [25, 50, 100],
'clf__C': np.logspace(-3, 3, 7),
'clf__penalty': ['l1']
}
scores=['accuracy', 'f1']
grid_tf_idf_LDA = GridSearchCV(
clf_tf_idf_LDA,
param_grid=params_tf_idf_LDA,
cv=3,
scoring=scores,
refit=scores[0],
n_jobs=-1,
verbose=1
)
grid_tf_idf_LDA.fit(X_train, y_train)
# print(grid_tf_idf_LDA.best_estimator_)
print("Best score: %0.3f" % grid_tf_idf_LDA.best_score_)
predictions=grid_tf_idf_LDA.best_estimator_.predict(X_test)
print("Precision: {0:6.2f}".format(precision_score(y_test, predictions, average='macro')))
print("Recall: {0:6.2f}".format(recall_score(y_test, predictions, average='macro')))
print("F1_score: {0:6.2f}".format(f1_score(y_test, predictions, average='macro')))
print("Accuracy: {0:6.2f}".format(accuracy_score(y_test, predictions)))
print(classification_report(y_test, predictions))
labels = grid_tf_idf_LDA.best_estimator_.classes_
sns.heatmap(
data=confusion_matrix(y_test, predictions),
annot=True,
fmt="d",
cbar=False,
xticklabels=labels,
yticklabels=labels
)
plt.title("Confusion matrix")
plt.show()
###Output
_____no_output_____
###Markdown
Summary:
###Code
models = ['grid_cntv', 'grid_tf_idf', 'grid_tf_idf_TruncatedSVD', 'grid_tf_idf_LDA']
for model in models:
    print(model[5:])
    predictions = eval(model).best_estimator_.predict(X_test)
    # use two distinct placeholders so f1 and accuracy are not printed as the same number
    print("f1_score: {0:6.3f}\nAccuracy: {1:6.3f}\n\n".format(
        f1_score(y_test, predictions, average='macro'),
        accuracy_score(y_test, predictions)))
###Output
cntv
f1_score: 0.948
Accuracy: 0.948
tf_idf
f1_score: 0.952
Accuracy: 0.952
tf_idf_TruncatedSVD
f1_score: 0.940
Accuracy: 0.940
tf_idf_LDA
f1_score: 0.915
Accuracy: 0.915
###Markdown
**tf-idf is the most successful model**
###Code
pass
###Output
_____no_output_____ |
notebooks/2020-03-13-starter-notebook.ipynb | ###Markdown
Coronavirus ___Coronavirus COVID-19 (2019-nCoV) COVID-19 Data for South Africa About NotebookThe goal here is to explore data for Coronavirus spread in South Africa, this notebook will be updated as time goes, Site to know more about Coronavirus https://www.who.int/health-topics/coronavirus ___ Load PackagesLet's load packages that we need to achieve the goal above
###Code
import os
import pandas as pd
import seaborn as sns
import networkx as nx
import matplotlib.pyplot as plt
from datetime import datetime
from textwrap import wrap
### NOTE: `conda install basemap`
import conda
conda_file_dir = conda.__file__
conda_dir = conda_file_dir.split('lib')[0]
proj_lib = os.path.join(os.path.join(conda_dir, 'share'), 'proj')
os.environ["PROJ_LIB"] = proj_lib
from mpl_toolkits.basemap import Basemap
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
pd.options.display.max_colwidth = 100
###Output
_____no_output_____
###Markdown
Data ___Let's check the data, we may have multiple files in the data directory
###Code
os.listdir('../data')
###Output
_____no_output_____
###Markdown
We have one CSV file that contains the data; let's use it Load data___Let's load the data
###Code
df = pd.read_csv('../data/covid19za_timeline_confirmed.csv')
###Output
_____no_output_____
###Markdown
Partial View of Data___Let's see how the data is formatted
###Code
df.head()
###Output
_____no_output_____
###Markdown
Conversion ___Let's convert the columns to the correct data types. For now we only convert `date`; `age` has missing values, so it cannot be converted yet
###Code
df['date'] = df.apply(lambda x: datetime.strptime(x['date'], '%d-%m-%Y').date(), axis=1)
###Output
_____no_output_____
###Markdown
Age Group___Let's create age group column to use it for further analysis
###Code
bins = [17, 18, 30, 40, 50, 60, 70, 80]
labels = ['0-17', '18-29', '30-39', '40-49', '50-59', '60-69', '70+']
df['age_group'] = pd.cut(df.age, bins, labels = labels, include_lowest = True)
###Output
_____no_output_____
###Markdown
Visualizations___Graphs are easier to read and explain, so let's visualize our data
###Code
df.head()
df.gender.value_counts()
def vertical_bar_chart(df, x, y, label, sort, figsize=(13, 9), ascending=True):
"""
This customize vertical bar chart from seaborn(sns as aliased above)
Args:
df: dataframe
x: x-axis column
y: y-axis column
label: string to label the graph
figsize: figure size to make chart small or big
ascending: ascending order from smallest to biggest
sort: which column to sort by
Returns:
None
"""
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=figsize)
#sns.set_color_codes(sns.color_palette(["#0088c0"]))
# Text on the top of each barplot
ax = sns.barplot(x=x, y=y, data=df.sort_values(sort, ascending=ascending),
label=label, color="b", palette=["#0088c0"])
total = df[y].sum()
for p in ax.patches:
ax.annotate(str(format(p.get_height()/total * 100, '.2f')) + '%' + ' (' + str(int(p.get_height())) + ')',
(p.get_x() + p.get_width() / 2., p.get_height()),
ha = 'center', va = 'center',
xytext = (0, 10), textcoords = 'offset points')
y_value=['{:,.0f}'.format(x/total * 100) + '%' for x in ax.get_yticks()]
plt.yticks(list(plt.yticks()[0]) + [10])
ax.set_yticklabels(y_value)
plt.xlabel('')
plt.ylabel('')
sns.despine(left=True, bottom=True)
def horizontal_bar_chart(df, x, y, label, figsize=(16, 16)):
"""
This customize horizontal bar chart from seaborn(sns as aliased above)
Args:
df: dataframe
x: x-axis column
y: y-axis column
label: string to label the graph
figsize: figure size to make chart small or big
Returns:
None
"""
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=figsize)
ax = sns.barplot(x=x, y=y, data=df,
label=label, color="b", palette=["#0088c0"])
total = df.values[:, 1].sum()
for i, v in enumerate(df.values[:, 1]):
ax.text(v + 0.1, i + .25, str(format(v / total * 100, '.2f')) + '% (' + str(v) + ')')
labels = [ '\n'.join(wrap(l, 20)) for l in df.values[:, 0]]
ax.set_yticklabels(labels)
x_value=['{:,.0f}'.format(x/total * 100) + '%' for x in ax.get_xticks()]
plt.xticks(list(plt.xticks()[0]) + [10])
ax.set_xticklabels(x_value)
plt.ylabel('')
plt.xlabel('')
sns.despine(left=True, bottom=True)
def line_graph(df, column, figsize=(12, 8)):
"""
This customize line chart from matplotlib(plt as aliased above)
Args:
df: dataframe
column: x-axis column
label: string to label the graph
figsize: figure size to make chart small or big
Returns:
None
"""
fig, ax = plt.subplots(figsize=figsize)
line_data = df[column].value_counts().reset_index().sort_values(by='index')
line_data.plot(x='index', y=column, style='o-', ax=ax)
plt.xlabel('')
def pie_chart(df, column):
"""
This customize pie chart from matplotlib(plt as aliased above)
Args:
df: dataframe
column: x-axis column
label: string to label the graph
figsize: figure size to make chart small or big
Returns:
None
"""
X = df[column].value_counts()
colors = ['#0088C0', '#82DAFF']
plt.pie(X.values, labels=X.index, colors=colors,
startangle=90,
explode = (0, 0),
textprops={'fontsize': 14},
autopct = '%1.2f%%')
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
___ Age group Let's see which age groups are infected by the Coronavirus
###Code
vertical_bar_chart(df['age_group'].value_counts().reset_index(), 'index', 'age_group', 'Age distribution', 'index')
plt.title("Covid19 ZA Confirmed Positve Cases Age Distribution, as per 16 March 2020", fontsize=16)
plt.annotate('Based on Coronavirus COVID-19 (2019-nCoV) Data Repository for South Africa [Hosted by DSFSI group at University of Pretoria]',
(0.1, 0.02), xycoords='figure fraction', fontsize=12)
plt.savefig("../visualisation/age_distribution_confirmed_cases.png",
# bbox_inches='tight',
transparent=True,
pad_inches=0, dpi = 200)
###Output
_____no_output_____
###Markdown
Coronavirus infectionThe age group with the most infections is **30 - 39**; note that ages **30 - 59** account for about **~83%** of infections ___ Daily infectionsLet's see how the virus is spreading by day
###Code
line_graph(df, 'date')
###Output
_____no_output_____
###Markdown
GenderLet's see which gender has more
###Code
pie_chart(df, 'gender')
###Output
_____no_output_____
###Markdown
**60%** of infected individuals are male and **40%** are female___ Province Let's see which provinces are affected by the Coronavirus
###Code
horizontal_bar_chart(df['province'].value_counts().reset_index(), 'province', 'index', 'Province', figsize=(12, 4))
###Output
_____no_output_____
###Markdown
**Gauteng** is leading in terms of reported cases, which makes sense since many travellers pass through OR Tambo International Airport___ Country Let's see which countries they travelled to before coming to South Africa
###Code
df['transmission_type'] = df['transmission_type'].apply(lambda x:
x.replace('Travelled to ', '')\
.replace(' and', ';')\
.replace('Visiting resident of ', '')\
.replace(' travelled to', ';'))
horizontal_bar_chart(df['transmission_type'].value_counts().reset_index(), 'transmission_type',
'index', 'Country', figsize=(12, 6))
###Output
_____no_output_____
###Markdown
NoteIt seems many people were infected in Italy; many appear to pass through Italy first, and there are also many interactions involving Italy and Austria___ Network for pathwaysLet's plot the network path for their trips (in progress); a sketch of how the paths could be drawn is given below
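One possible way to draw the paths once the coordinate dictionaries (`countries`, `province`) and the Basemap instance `m` from the next cell exist; the dictionary values are assumed to be `[lat, lon]`, and the province latitudes get a sign flip here since South Africa lies in the southern hemisphere:
```python
# draw a great-circle line for each reported trip: origin country -> home province
for _, row in df.iterrows():
    for origin in row['transmission_type'].split('; '):
        if origin in countries and row['province'] in province:
            c_lat, c_lon = countries[origin]
            p_lat, p_lon = province[row['province']]
            m.drawgreatcircle(c_lon, c_lat, p_lon, -p_lat,
                              linewidth=1, color='#FF8800')
```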
###Code
### Hardcoded For now to simulate use of maps
#### COUNTRIES ####
countries = {'Italy':[41.8719, 12.5674], 'Germany':[51.1657, 10.4515],
'Austria':[47.5162, 14.5501], 'Portugal':[39.3999, 8.2245], 'Switzerland':[46.8182, 8.2275],
'Turkey':[38.9637, 35.2433], 'UK':[55.3781, 3.4360],
'USA':[37.0902, 95.7129], 'Greece':[39.0742, 21.8243]}
#### Provinces in South Africa ####
province = {'KZN':[28.5306, 30.8958], 'GP':[26.2708, 28.1123], 'WC':[33.2278, 21.8569], 'MP':[25.5653, 30.5279]}
for index, row in df.iterrows():
if ';' in row['transmission_type']:
print(row['transmission_type'].split('; '), '->', row['province'])
else:
print(row['transmission_type'],'->', row['province'])
plt.figure(figsize = (30,30))
m = Basemap(projection='gall')
m.fillcontinents(color="#61993b",lake_color="#008ECC")
m.drawmapboundary(fill_color="#5D9BFF")
m.drawcountries(color='#585858',linewidth = 1)
m.drawstates(linewidth = 0.2)
m.drawcoastlines(linewidth=1)
plt.show()
###Output
_____no_output_____ |
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb | ###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
import numpy as np
from sklearn.metrics import accuracy_score
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
majority_class = y.mode()[0]
y_pred = np.full(shape=y.shape, fill_value=majority_class)
print('Majority Class Baseline Accuracy:', accuracy_score(y,y_pred))
###Output
Majority Class Baseline Accuracy: 0.7620320855614974
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
from sklearn.metrics import recall_score
print('Majority Class Baseline Recall:', recall_score(y,y_pred))
###Output
Majority Class Baseline Recall: 0.0
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, test_size=0.25, random_state=None, shuffle=True)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
import statsmodels.api as sm
model = sm.OLS(y_train, sm.add_constant(X_train))
print(model.fit().summary())
from sklearn.preprocessing import PolynomialFeatures
for degree in [0, 1, 2, 3]:
features = PolynomialFeatures(degree).fit(X_train).get_feature_names(X_train.columns)
print(f'{degree} degree polynomial has {len(features)} features')
print(features)
print('\n')
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LogisticRegression(**kwargs))
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
degree = [0, 1, 2, 3]
train_score, val_score = validation_curve(
PolynomialRegression(), X_train, y_train,
param_name='polynomialfeatures__degree', param_range=degree,
scoring='recall_macro', cv=3)
plt.plot(degree, np.median(train_score, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.xlabel('degree');
#1st degree (original features have best validation score)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.linear_model import LogisticRegression
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression()
)
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight': [None,'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipe,param_grid=param_grid, cv=5,
scoring='recall_macro')
gs.fit(X_train,y_train)
pd.DataFrame(gs.cv_results_).sort_values('rank_test_score')
###Output
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split0_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split1_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split2_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split3_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('split4_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('mean_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('std_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print('Best score:',gs.best_score_)
print('Parameters for best score:', gs.best_params_)
# Which features were selected?
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
#Testing model on Test data
test_score = gs.score(X_test, y_test)
print('Test Score:', test_score)
###Output
Test Score: 0.7088985788113695
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
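Reading the matrix with rows as actual labels and columns as predictions gives TN = 85, FP = 58, FN = 8, TP = 36; the calculations below follow $$ \text{accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\qquad \text{precision}=\frac{TP}{TP+FP},\qquad \text{recall}=\frac{TP}{TP+FN}. $$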
###Code
fn = 8
fp = 58
tn = 85
tp = 36
accuracy = (tp+tn)/(tp+tn+fp+fn)
print('Accuracy:',accuracy)
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
precision = tp / (tp+fp)
print('Precision:',precision)
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
recall = tp / (tp+fn)
print('Recall:', recall)
f1 = (2*tp) / ((2*tp)+fp+fn)
print('F1 Score:',f1)
fpr = fp / (fp+tn)  # false positive rate = FP / (FP + TN)
print('False Positive Rate:',fpr)
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3.
###Code
# imports
# general libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tabulate import tabulate
# sklearn
from sklearn.metrics import roc_curve, auc, accuracy_score, precision_score
from sklearn.metrics import recall_score, f1_score, confusion_matrix
from sklearn.metrics import classification_report, make_scorer
from sklearn.model_selection import GridSearchCV, validation_curve, learning_curve
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
###Output
_____no_output_____
###Markdown
Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
# create our X and y variables
X = df.drop('made_donation_in_march_2007', axis=1)
y = df['made_donation_in_march_2007']
assert X.isnull().sum().sum() == 0
assert y.isnull().sum().sum() == 0
majority_class = y.mode()[0]
y_pred = [majority_class] * y.shape[0]
print ('Accuracy Score %.3f' % (accuracy_score(y, y_pred)))
###Output
Accuracy Score 0.762
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
print ('Recall Score %.3f' % (recall_score(y, y_pred)))
###Output
Recall Score 0.000
###Markdown
Based on class imbalance alone, we're able to correctly identify the majority of samples. Around 76% of people in the dataset did not give blood.Recall is more nuanced. Scikit-learn automatically assumes that we want the recall score of the *positive* label. In the case above, we're always guessing the negative label. This is obviously very poor performance if we are indeed only concerned about recall score of the positive class.Recall for the negative class, however, is 100%. It's beneficial to look at all performance metrics for our simple majority classifier. This allows us to examine recall scores for different classes and class weightings.
###Code
print ('Classification Report (Majority Classifier Baseline)\n\n', classification_report(y, y_pred))
###Output
Classification Report (Majority Classifier Baseline)
precision recall f1-score support
0 0.76 1.00 0.86 570
1 0.00 0.00 0.00 178
micro avg 0.76 0.76 0.76 748
macro avg 0.38 0.50 0.43 748
weighted avg 0.58 0.76 0.66 748
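###Markdown
The per-class recall values in the report can also be pulled out directly; a small sketch using the `y` and `y_pred` already defined above:
###Code
# average=None returns one recall value per class: index 0 is the negative class,
# index 1 is the positive class. For the majority baseline this is [1.0, 0.0].
print(recall_score(y, y_pred, average=None))
###Output
_____no_output_____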
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.25,
random_state=1,
shuffle=True)
###Output
_____no_output_____
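###Markdown
Not required by the challenge, but since the classes are imbalanced (~76% / 24%), a stratified split keeps that ratio identical in both halves. A sketch of the variant (the `_s` names are just placeholders so the original split above is left untouched):
###Code
# Stratified version of the same 75/25 split: class proportions are preserved
# in train and test instead of being left to chance.
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, y, test_size=0.25, random_state=1, shuffle=True, stratify=y)
print(y_train_s.mean(), y_test_s.mean())  # positive-class share should match in both
###Output
_____no_output_____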
###Markdown
Part 1.3 Feature EngineeringLet's start by taking a look at the training data.
###Code
# let's first take a look at the distribution of our variables
X_train.describe()
# build the pairplot from the training split so the hue labels align with the rows
X_pairplot = X_train.copy()
X_pairplot['y'] = y_train
sns.pairplot(X_pairplot, hue='y');
###Output
_____no_output_____
###Markdown
A few takeaways:- People who donated recently seem more likely to donate again- People who donated more frequently or more volume per donation seem more likely to donate again- All of our features are right-skewed, with fairly long tails for long time or frequent donors
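A quick numeric check of the skew claim (a small sketch using pandas' `skew()` on the raw training features):
###Code
# Positive skew values confirm the right-skewed distributions seen in the pairplot.
print(X_train.skew())
###Output
_____no_output_____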
###Code
def wrangle(X):
    # returns a modified dataframe with additional features
    # specific to the blood donations dataset
    # include log of all features to account for skewed data
    for col in X.columns:
        X['log_' + col] = np.log(X[col])
    # binary feature: True if the last donation was within the past 3 years
    X['recent_donor'] = X['months_since_last_donation'] < 36
    # include donation and volume-per-month rates
    X['donations_per_month'] = (X['number_of_donations'] / (X['months_since_first_donation'] - X['months_since_last_donation']))
    X['volume_per_month'] = (X['total_volume_donated'] / (X['months_since_first_donation'] - X['months_since_last_donation']))
    return X
# wrangle test and train data, zeroing out the infinities and NaNs produced by
# log(0) and by zero-month denominators
X_train = wrangle(X_train).replace([np.inf, -np.inf, np.nan], 0)
X_test = wrangle(X_test).replace([np.inf, -np.inf, np.nan], 0)
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: RuntimeWarning: divide by zero encountered in log
import sys
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
import sys
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:10: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
# Remove the CWD from sys.path while we load stuff.
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:13: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
del sys.path[0]
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:14: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
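###Markdown
Not a required change, but the warnings above can be avoided: working on an explicit copy silences the SettingWithCopyWarning, and `np.log1p` (log of 1 + x) is defined at zero, so there is no divide-by-zero warning. Note that `log1p` shifts the values slightly relative to plain `log`, so this is an alternative rather than a drop-in replacement. A sketch:
###Code
def wrangle_safe(X):
    # work on a copy to avoid mutating (a view of) the caller's frame
    X = X.copy()
    raw_cols = ['months_since_last_donation', 'number_of_donations',
                'total_volume_donated', 'months_since_first_donation']
    for col in raw_cols:
        # log1p(x) = log(1 + x) handles zero-valued rows without warnings
        X['log_' + col] = np.log1p(X[col])
    # True if the last donation was within the past 3 years
    X['recent_donor'] = X['months_since_last_donation'] < 36
    months_active = X['months_since_first_donation'] - X['months_since_last_donation']
    X['donations_per_month'] = X['number_of_donations'] / months_active
    X['volume_per_month'] = X['total_volume_donated'] / months_active
    # single-visit donors have months_active == 0, which yields inf; zero those out
    return X.replace([np.inf, -np.inf, np.nan], 0)
###Output
_____no_output_____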
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipeline_lr = make_pipeline(StandardScaler(),
SelectKBest(),
LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
# initialize parameter grid
param_grid = {'selectkbest__k' : np.arange(1, 10),
'logisticregression__class_weight' : [None, 'balanced'],# {0: 1, 1:10}], <- this will obviously give you recall of 100% for positive class
'logisticregression__C' : [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]}
# create scorer using the AVERAGE of recall scores for both classes (macro-averaged recall)
recall_scorer = make_scorer(recall_score, pos_label=1, average='macro')
# grid search for best parameters
gs = GridSearchCV(estimator=pipeline_lr,
param_grid=param_grid,
scoring=recall_scorer,
cv=5,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.
DeprecationWarning)
###Markdown
**Please note** I'm using a modified recall score function above, not simply recall for the positive label. Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print ('Best CV Recall Score (Training Data): %.3f' % gs.best_score_)
print ('\nFinal Hyperparameter Values:\n')
print (tabulate([[param, gs.best_params_[param]] for param in gs.best_params_], headers=['Hyperparameter', 'Optimal Value']))
###Output
Best CV Recall Score (Training Data): 0.669
Final Hyperparameter Values:
Hyperparameter Optimal Value
-------------------------------- ---------------
logisticregression__C 0.0001
logisticregression__class_weight balanced
selectkbest__k 1
###Markdown
If we're *only* trying to optimize recall score for the positive label, it's pretty easy to modify logistic regression class weights to ensure recall is 100%. Unsurprisingly, given we're optimizing for average recall score between classes, the optimal class weight is 'balanced'. Let's look at some learning and validation curves for our training size and regularization parameter. Some Fun Learning and Validation Curves
###Code
# examines how our model learns as it sees more samples
train_sizes, train_scores, test_scores =\
learning_curve(estimator=pipeline_lr,
X=X_train,
y=y_train,
scoring=recall_scorer,
train_sizes=np.linspace(0.1,1.0,10))
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='blue', marker='o',
markersize=5, label='training recall')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='green', marker='s',
markersize=5, label='validation recall')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of Training Samples')
plt.ylabel('Recall')
plt.legend(loc='lower right')
plt.show()
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_split.py:2053: FutureWarning: You should specify a value for 'cv' instead of relying on the default value. The default value will change from 3 to 5 in version 0.22.
warnings.warn(CV_WARNING, FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [8] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [8] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [8] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [8] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
###Markdown
The relationship between training and validation score appears to be stable after around 200 samples are observed.
###Code
c_range = [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
train_scores, test_scores = validation_curve(estimator=pipeline_lr,
X=X_train,
y=y_train,
scoring=recall_scorer,
param_name='logisticregression__C',
param_range=c_range,
cv=10)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(c_range, train_mean,
color='blue', marker='o',
markersize=5, label='training recall')
plt.fill_between(c_range,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(c_range, test_mean,
color='green', marker='s',
markersize=5, label='validation recall')
plt.fill_between(c_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Parameter C')
plt.ylabel('Recall')
plt.legend(loc='lower right')
plt.ylim([0.4,0.7])
plt.show()
###Output
_____no_output_____
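###Markdown
Because `C` spans eight orders of magnitude, the linear x-axis above squeezes most of the grid into the left edge. A sketch that re-draws the same curves on a log-scaled axis (reusing `c_range`, `train_mean`, and `test_mean` from the cell above):
###Code
# Same validation-curve data as above, plotted with a logarithmic C axis.
plt.plot(c_range, train_mean, color='blue', marker='o', markersize=5, label='training recall')
plt.plot(c_range, test_mean, color='green', marker='s', markersize=5, label='validation recall')
plt.xscale('log')
plt.grid()
plt.xlabel('Parameter C (log scale)')
plt.ylabel('Recall')
plt.legend(loc='lower right')
plt.show()
###Output
_____no_output_____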
###Markdown
The relationship between training and test scores appears to be stable anywhere after a very small value for C. Final FeaturesWhich features did our model end up selecting?
###Code
selected_mask = gs.best_estimator_.named_steps['selectkbest'].get_support()
print('Features Selected:')
for name in X_train.columns[selected_mask]:
print(name)
print('\nFeatures Not Selected:')
for name in X_train.columns[~selected_mask]:
print(name)
###Output
Features Selected:
log_months_since_last_donation
Features Not Selected:
months_since_last_donation
number_of_donations
total_volume_donated
months_since_first_donation
log_number_of_donations
log_total_volume_donated
log_months_since_first_donation
recent_donor
donations_per_month
volume_per_month
###Markdown
Evaluation on Test DataNow, let's do a final evaluation on our test set, and look at our confusion matrix for the final model.
###Code
print ('Recall Score (Test Data): %.3f' % gs.score(X_test, y_test))
###Output
Recall Score (Test Data): 0.706
###Markdown
This testing score is satisfactory for our basic model for two reasons. First, it significantly exceeds our baseline recall score of 0.5 (we kind of cheated using all the data, but still). Second, the testing score is very close to our best model's cross-validation score on the training data, leading us to believe the model should generalize well.
###Code
confmat = confusion_matrix(y_test, gs.predict(X_test))
fig, ax = plt.subplots(figsize=(5,5))
ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.3)
for i in range(confmat.shape[0]):
for j in range(confmat.shape[1]):
ax.text(x=j, y=i,
s=confmat[i, j],
va='center', ha='center')
plt.title('Test Data Confusion Matrix')
plt.xlabel('predicted label')
plt.ylabel('true label')
plt.show()
###Output
_____no_output_____
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
accuracy = (36 + 85) / (36 + 85 + 58 + 8)
print ('Accuracy Score: %.3f' % accuracy)
###Output
Accuracy Score: 0.647
###Markdown
Calculate precision
###Code
precision = (36) / (36 + 58)
print ('Precision Score: %.3f' % precision)
###Output
Precision Score: 0.383
###Markdown
Calculate recall
###Code
recall = (36) / (36 + 8)
print ('Recall Score: %.3f' % recall)
###Output
Recall Score: 0.818
###Markdown
Calculate F1 score
###Code
f1 = 2 * (precision * recall) / (precision + recall)
print ('F1 Score: %.3f' % f1)
###Output
F1 Score: 0.522
###Markdown
Calculate False Positve Rate
###Code
fpr = 58 / (58 + 85)
print ('False Positive Rate: %.3f' % fpr)
###Output
False Positive Rate: 0.406
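###Markdown
As a cross-check on the hand calculations, one can rebuild label vectors from the four confusion-matrix counts and let scikit-learn recompute the same metrics; a sketch (the `cm_*` names are just local placeholders):
###Code
# Reconstruct (y_true, y_pred) pairs from the counts: TN=85, FP=58, FN=8, TP=36.
cm_true = np.array([0] * 85 + [0] * 58 + [1] * 8 + [1] * 36)
cm_pred = np.array([0] * 85 + [1] * 58 + [0] * 8 + [1] * 36)
print('Accuracy : %.3f' % accuracy_score(cm_true, cm_pred))
print('Precision: %.3f' % precision_score(cm_true, cm_pred))
print('Recall   : %.3f' % recall_score(cm_true, cm_pred))
print('F1       : %.3f' % f1_score(cm_true, cm_pred))
###Output
_____no_output_____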
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
###Output
_____no_output_____
###Markdown
Added some cleanup checks
###Code
df.shape
df.head()
df.isnull().sum()
df.describe()
df.dtypes
# I'll re-cast the whole dataset as floats to prevent annoying notifications about dtype
# changes when running GridSearchCV
df = df.astype('float64')
df.head()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
import numpy as np
from sklearn.metrics import accuracy_score, classification_report
y_val = df['made_donation_in_march_2007']
majority_class = y_val.mode()[0]
y_pred = np.full(shape=df.shape[0], fill_value=majority_class)
print(accuracy_score(y_val, y_pred))
# For a binary variable like this, the accuracy score just reflects the
# distribution of that variable in y
y_val.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
0.0 0.76 1.00 0.86 570
1.0 0.00 0.00 0.00 178
micro avg 0.76 0.76 0.76 748
macro avg 0.38 0.50 0.43 748
weighted avg 0.58 0.76 0.66 748
###Markdown
Majority class baseline is polarizing. The recall of the positive class is zero, because by assuming that nobody donated we retrieved none of the people that did. That is the usual meaning of recall, though we could also talk about the recall of the negative class, in which case we retrieved all of them. In both cases, recall means the fraction of the relevant instances that have been retrieved, out of the total number of relevant instances. Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42)
# Verify all shapes
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
# In order to choose my scaler, I'll check first whether there are any stark outliers.
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="ticks", color_codes=True)
# y_vars and x_vars are lists of column names.
sns.pairplot(data=df, y_vars=['made_donation_in_march_2007'], x_vars=X.columns)
plt.show()
# No stark outliers, so I'll go with StandardScaler
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
# Define an estimator and param_grid
pipe = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
gs.best_score_
gs.best_params_
###Output
_____no_output_____
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36
###Code
TN = 85
FN = 8
FP = 58
TP = 36
total = 85+8+58+36
actual_yes = 8+36
actual_no = 85+58
predicted_yes = 58+36
predicted_no = 85+8
###Output
_____no_output_____
###Markdown
Calculate accuracy
###Code
accuracy = (TP+TN)/total
accuracy
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
precision = TP/predicted_yes
precision
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
recall = TP/actual_yes
recall
###Output
_____no_output_____
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate. Part 1 bonus: feature engineering
###Code
# Based on the pairplot, it looks like high values of months_since_last_donation should
# more starkly predict that a person would not donate in March 2007. Thus, I'll square that
# feature and see if it improves my classification (by giving greater weight to people who
# haven't donated in a long time). I'll create a second dataset, df2, to contain my
# engineered features and remain separate to DF (so I can compare the two.)
df2 = df.copy()
df2['lag_squared'] = df2.months_since_last_donation**2
X2 = df2.drop(columns='made_donation_in_march_2007')
y2 = df['made_donation_in_march_2007']
X2_train, X2_test, y2_train, y2_test = train_test_split(
X2, y2, test_size=0.25, random_state=42)
# Verify all shapes
X2_train.shape, X2_test.shape, y2_train.shape, y2_test.shape
###Output
_____no_output_____
###Markdown
Part 2 bonus: expanded pipelineThe fact that SelectKBest chose `k=1` means that LogisticRegression performs best when it ignores all the other features and just uses the top one. Thus, feature engineering won't change my recall score unless my engineered feature is so good that SelectKBest changes to `k=2` to include it. Very unlikely scenario, but let's find out.
###Code
# Original pipeline, fit to the new data
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
pipe = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs'))
gs2 = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs2.fit(X2_train, y2_train);
# The previous best_score was 0.784519402166461
gs2.best_score_
gs2.best_params_
###Output
_____no_output_____
###Markdown
Sure enough, SelectKBest still chooses only one feature and is ignoring my engineered one. Clearly all the other features are so useless that the model is better off ignoring them. The only way I can improve this score, then, is if I can somehow make the best feature even better than it is right now. Enter PCA.
###Code
from sklearn.decomposition import PCA
# I'm getting rid of SelectKBest, since PCA allows me to decide how many components to retain. It achieves
# basically the same result.
param_grid2 = {
'pca__n_components': [1,2,3,4,5],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
pipe2 = make_pipeline(
StandardScaler(),
PCA(),
LogisticRegression(solver='lbfgs'))
gs2 = GridSearchCV(pipe2, param_grid=param_grid2, cv=5,
scoring='recall',
verbose=1)
gs2.fit(X2_train, y2_train);
# The previous best_score was 0.784519402166461
gs2.best_score_
gs2.best_params_
###Output
_____no_output_____
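###Markdown
Before moving on, it can be useful to peek inside the winning pipeline and see what the retained principal component is actually made of; a sketch, assuming `gs2` as fit above:
###Code
# Inspect the fitted PCA step of the best pipeline: number of components kept,
# variance explained, and the loadings on the original (engineered) columns.
pca_step = gs2.best_estimator_.named_steps['pca']
print('Components kept:', pca_step.n_components_)
print('Explained variance ratio:', pca_step.explained_variance_ratio_)
loadings = pd.DataFrame(pca_step.components_, columns=X2_train.columns)
print(loadings)
###Output
_____no_output_____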
###Markdown
Success!! The recall score went slightly up! Part 3 bonus: printouts and testing
###Code
# Which features were selected by the original gridsearch?
selected_mask = gs.best_estimator_.named_steps['selectkbest'].get_support()
selected_names = X_train.columns[selected_mask]
unselected_names = X_train.columns[~selected_mask]
print('Features selected:')
for name in selected_names:
print(f'> {name}')
print()
print('Features not selected:')
for name in unselected_names:
print(f'> {name}')
###Output
Features selected:
> months_since_last_donation
Features not selected:
> number_of_donations
> total_volume_donated
> months_since_first_donation
###Markdown
Part 4 bonus: extra scores
###Code
F1 = 2*precision*recall/(precision+recall)
F1
FPR = FP/actual_no
FPR
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.style as style
style.use('seaborn-whitegrid')
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
print (df.shape)
df.head()
df.dtypes
df.isnull().sum()
df.corr()
import seaborn as sns
sns.heatmap(df.corr())
y = df['made_donation_in_march_2007']
X = df.drop('made_donation_in_march_2007',axis='columns')
X.shape, y.shape
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.) **The majority class baseline is the mode of the outcome**
###Code
y.value_counts(normalize=True)
majority_class = y.mode()[0]
print(majority_class)
y_pred = np.full(shape=y.shape, fill_value = majority_class)
from sklearn.metrics import accuracy_score
print(f'The accuracy of the MCB is {accuracy_score(y, y_pred)}')
###Output
The accuracy of the MCB is 0.7620320855614974
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.) **Recall = true positive / (true positive + false negative) but with a majority class baseline, the predicted positive is zero because the mode is negative.**
###Code
pd.crosstab(y,y_pred)
# Recall = TP / TP + FN,
TP = 0
FN = 178
recall = TP/(TP + FN)
print(f'The recall score is {recall}')
from sklearn.metrics import recall_score
recall_score(y,y_pred)
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, shuffle = True, random_state=237)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.feature_selection import f_regression, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
pipe = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression(solver = 'lbfgs')
)
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
%%time
from sklearn.model_selection import GridSearchCV
param_grid = {
'selectkbest__k': range(1, len(X_train.columns)),
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall', return_train_score=True,verbose=10)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 54 candidates, totalling 270 fits
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=balanced, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=balanced, selectkbest__k=1, score=0.6923076923076923, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=balanced, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=balanced, selectkbest__k=1, score=0.7692307692307693, total= 0.0s
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print()
print('Best Cross-Validation Score:', validation_score)
# print()
# print('Best estimator:', gs.best_estimator_)
print()
print('Best parameters:', gs.best_params_)
print()
###Output
Best Cross-Validation Score: 0.7921294391882627
Best parameters: {'logisticregression__C': 1.0, 'logisticregression__class_weight': 'balanced', 'selectkbest__k': 2}
###Markdown
**The recall score from the majority class baseline was 0.0. The model with tuned parameters has a recall score of 0.79213.** Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 True Negative (TN) = 85, True Positive (TP) = 36, False Negative (FN) = 8, False Positive (FP) = 58
###Code
TN = 85
TP = 36
FN = 8
FP = 58
###Output
_____no_output_____
###Markdown
Calculate accuracy
###Code
# Accuracy = (TN + TP) / (TN + TP + FN + FP)
accuracy = (TN + TP) / (TN + TP + FP + FN)
print (accuracy)
###Output
0.6470588235294118
###Markdown
Calculate precision
###Code
# Precision = TP / (TP + FP) aka true positive divided by predicted yes
precision = TP / (TP + FP)
print(precision)
###Output
0.3829787234042553
###Markdown
Calculate recall
###Code
# Recall = TP / (TP + FN) aka true positive divided by actual yes aka sensitivity
recall = TP / (TP + FN)
print(recall)
###Output
0.8181818181818182
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate. Part 1
###Code
from sklearn.preprocessing import PolynomialFeatures
for degree in [0, 1, 2, 3]:
features = PolynomialFeatures(degree).fit(X_train).get_feature_names(X_train.columns)
print(f'{degree} degree polynomial has {len(features)} features')
print(features)
print('\n')
def PolynomialRegression(degree=2,**kwargs):
return make_pipeline(PolynomialFeatures(degree),
LogisticRegression(C=1.0,class_weight='balanced', solver='lbfgs' ))
param_grid = {
'polynomialfeatures__degree': [0,1,2,3]
}
gridsearch_fe = GridSearchCV(PolynomialRegression(), param_grid=param_grid,
scoring = 'recall', cv=5,
return_train_score=True,verbose=10)
gridsearch_fe.fit(X_train,y_train)
validation_score = gridsearch_fe.best_score_
print()
print('Best Cross-Validation Score:', validation_score)
# print()
# print('Best estimator:', gs.best_estimator_)
print()
print('Best parameters:', gridsearch_fe.best_params_)
print()
###Output
Best Cross-Validation Score: 0.784519402166461
Best parameters: {'polynomialfeatures__degree': 2}
###Markdown
Part 2
###Code
%%time
from sklearn.feature_selection import RFECV
poly = PolynomialFeatures(degree=2)
X_train_polynomial = poly.fit_transform(X_train)
print(X_train.shape, X_train_polynomial.shape)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_polynomial)
rfe = RFECV(LogisticRegression(C=1.0,class_weight='balanced', solver='lbfgs' ),
scoring='recall',
step=1, cv=5, verbose=1)
X_train_subset = rfe.fit_transform(X_train_scaled, y_train)
param_grid = {
'class_weight': [None, 'balanced'],
'C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
# Fit on the train set, with grid search cross-validation
gs2 = GridSearchCV(LogisticRegression(), param_grid = param_grid, cv=5,
scoring='recall',
verbose=1)
gs2.fit(X_train_subset, y_train)
validation_score = gs2.best_score_
print()
print('Cross-Validation Score:', validation_score)
print()
print('Best estimator:', gs2.best_estimator_)
print()
###Output
Cross-Validation Score: 0.7844508432743728
Best estimator: LogisticRegression(C=0.01, class_weight='balanced', dual=False,
fit_intercept=True, intercept_scaling=1, max_iter=100,
multi_class='warn', n_jobs=None, penalty='l2', random_state=None,
solver='warn', tol=0.0001, verbose=0, warm_start=False)
###Markdown
Part 3
###Code
all_names = poly.get_feature_names(X_train.columns)
selected_mask = rfe.support_
selected_names = [name for name, selected in zip(all_names, selected_mask) if selected]

print(f'{rfe.n_features_} Features selected:')
for name in selected_names:
    print(name)
from sklearn.metrics import recall_score
# Predict with X_test features
y_pred = gs.predict(X_test)
# Compare predictions to y_test labels
test_score = recall_score(y_test, y_pred)
print('Test Score:', test_score)
###Output
Test Score: 0.8125
###Markdown
Part 4
###Code
pd.crosstab(y_test,y_pred)
TP = 39
TN = 77
FN = 9
FP = 62
precision = TP / (TP+FP)
recall = TP/(TP + FN)
precision, recall
f1_score = (2*precision*recall)/(precision+recall)
print(f'The F1 score is {f1_score}')
# False positive Rate = FP / all actual negative = FP / (FP + TN)
fpn = FP/(FP+TN)
print (f'The false positive rate is {fpn}')
###Output
The false positive rate is 0.4460431654676259
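###Markdown
A quick cross-check of the hand-computed F1 against scikit-learn, using the same `y_test` and `y_pred` as the crosstab above:
###Code
from sklearn.metrics import f1_score
print('sklearn F1 score: %.3f' % f1_score(y_test, y_pred))
###Output
_____no_output_____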
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
df.describe()
df.made_donation_in_march_2007.mode()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
df.made_donation_in_march_2007.value_counts()
# Accuracy
570/(570+178)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
# Recall on the majority class is 100% since you would be correct for all true cases of the majority class; recall on the minority class is 0 since you would be incorrect
# for all true cases of the minority class.
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X = df.drop('made_donation_in_march_2007', axis=1)
y = df.made_donation_in_march_2007
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.preprocessing import StandardScaler, RobustScaler
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LogisticRegression
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_regression),
LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
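###Markdown
One note: the challenge text asks for `SelectKBest(f_classif)`, while the pipeline above uses `f_regression` (which still runs on a binary target). A sketch of the same pipeline with the classification scoring function, not fit here (`pipe_classif` is just a placeholder name):
###Code
from sklearn.feature_selection import f_classif

pipe_classif = make_pipeline(
    RobustScaler(),
    SelectKBest(f_classif),
    LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____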
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print('Cross-Validation Score:', validation_score)
print()
print('Best estimator:', gs.best_estimator_)
###Output
Cross-Validation Score: 0.784519402166461
Best estimator: Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=1, score_func=<function f_regression at 0x7f8b83ab10d0>)), ('logisticregression', LogisticRegression(C=0.0001, class_weight='balanced', dual=Fal...enalty='l2', random_state=None,
solver='lbfgs', tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
print('df.shape:', df.shape)
print('df.dtypes:', df.dtypes)
df.head()
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.metrics import recall_score, f1_score, confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, RobustScaler
pd.value_counts(df['made_donation_in_march_2007'], normalize = True)
###Output
_____no_output_____
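###Markdown
To make the baseline numbers explicit before answering, a small sketch that scores an all-majority-class prediction (the `y_baseline` name is just a placeholder):
###Code
import numpy as np
from sklearn.metrics import accuracy_score

y_true = df['made_donation_in_march_2007']
y_baseline = np.full(len(y_true), y_true.mode()[0])  # predict 0 (no donation) for everyone
print('Baseline accuracy:', accuracy_score(y_true, y_baseline))  # equals the majority share, ~0.76
print('Baseline recall  :', recall_score(y_true, y_baseline))    # 0.0 for the positive class
###Output
_____no_output_____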
###Markdown
Our accuracy score would be about 0.76, the share of the majority class (no donation), as the quick check above shows. What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.) Recall answers the question: how much of the class that we're interested in does this model get for us? In this case, the majority class baseline never predicts the value 1 in 'made_donation_in_march_2007', so it retrieves none of the actual donors. Our recall score will be 0.0 Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
X = df.drop(['made_donation_in_march_2007'], axis = 1)
y = df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipeline = make_pipeline(MinMaxScaler(),
SelectKBest(f_classif),
LogisticRegression())
pipeline
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipeline, param_grid = param_grid, cv = 5,
scoring = 'recall', verbose = 0)
gs.fit(X_train, y_train)
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:323: DataConversionWarning: Data with input dtype int64 were all converted to float64 by MinMaxScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.
DeprecationWarning)
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print('Cross-validation Best Score:', gs.best_score_)
print('Best estimator', gs.best_estimator_)
###Output
Cross-validation Best Score: 0.7786309551015433
Best estimator Pipeline(memory=None,
steps=[('minmaxscaler', MinMaxScaler(copy=True, feature_range=(0, 1))), ('selectkbest', SelectKBest(k=2, score_func=<function f_classif at 0x7f8c255f3378>)), ('logisticregression', LogisticRegression(C=0.1, class_weight='balanced', dual=False,
fit_intercept=True, intercept_scaling=1, max_iter=100,
multi_class='warn', n_jobs=None, penalty='l2', random_state=None,
solver='warn', tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Part 4 — Calculate classification metrics from a confusion matrix. Suppose this is the confusion matrix for your binary classification model:

| | Predicted Negative | Predicted Positive |
| --- | --- | --- |
| Actual Negative | 85 | 58 |
| Actual Positive | 8 | 36 |

Calculate accuracy
###Code
accuracy = (36 + 85) / (36 + 58 + 85 + 8)
print('accuracy:', accuracy)
###Output
accuracy: 0.6470588235294118
###Markdown
Calculate precision
###Code
precision = 36 / (36 + 58)
print('precision:', precision)
###Output
precision: 0.3829787234042553
###Markdown
Calculate recall
###Code
recall = 36 / (36 + 8)
print('recall:', recall)
###Output
recall: 0.8181818181818182
###Markdown
BONUS — How you can earn a score of 3. Part 1: Do feature engineering, to try improving your cross-validation score. Part 2: Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3: Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4: Calculate F1 score and False Positive Rate. Part 1: Feature Engineering
###Code
print('df.dtypes:', df.dtypes)
df.head()
X_train['months_since_last_donation'].hist();
plt.title('Months since last donation');
###Output
_____no_output_____
###Markdown
From this histogram it looks like many donors give frequently. In 2005 there was a bird flu epidemic in Asia; those who carried the virus but survived would also have carried the immunizing agent in their blood, which could explain a surge in donations. If so, that wave of donations started only about two years before March 2007, with more and more people donating over those two years. Since people can give blood roughly every 56 days (about 2 months), many of those who gave blood in March would have last donated about 2 months earlier. Let's see if this is true.
###Code
X_train['months_since_last_donation'].hist(bins = range(0, 30, 2));
###Output
_____no_output_____
###Markdown
It does look like there is a pattern, but this pattern is already captured by the existing columns, so binning it adds no new information by itself.
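Still, the cycle could be encoded explicitly. A minimal sketch follows (illustrative only and not used in the analysis below; `X_train_fe`, `recently_donated` and `donations_per_month` are names introduced here):
###Code
# Sketch: derive a "recently donated" flag for the ~2-month cycle and an
# overall donation rate; the +1 in the denominator guards against division by zero.
X_train_fe = X_train.copy()
X_train_fe['recently_donated'] = (X_train_fe['months_since_last_donation'] <= 2).astype(int)
X_train_fe['donations_per_month'] = X_train_fe['number_of_donations'] / (X_train_fe['months_since_first_donation'] + 1)
X_train_fe[['recently_donated', 'donations_per_month']].describe()
###Output
_____no_output_____
###Markdown
Either derived column could be fed through the same pipeline; this is only an illustration of what feature engineering could look like here.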
###Code
df2 = X_train
df2 = df2.join(pd.DataFrame(y_train, columns = ['made_donation_in_march_2007']))
df2.head()
sns.pairplot(data = df2,
y_vars = 'made_donation_in_march_2007',
x_vars = df2.drop(['made_donation_in_march_2007'], axis = 1).columns);
###Output
_____no_output_____
###Markdown
Beyond recency/rate features like the sketch above, there is no obvious new feature to add here, so try removing features instead. From the pair plot, months since first donation does not look like a strong discriminator; the quick check below puts a number on that before dropping it.
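As a quick check (a sketch; `f_classif` is applied here directly to the training split, outside the pipeline):
###Code
# Univariate ANOVA F-scores -- the same statistic SelectKBest(f_classif)
# uses inside the pipeline -- for each feature, smallest first.
pd.Series(f_classif(X_train, y_train)[0], index=X_train.columns).sort_values()
###Output
_____no_output_____
###Markdown
A comparatively low F-score for months_since_first_donation would support dropping it.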
###Code
X_train = X_train.drop('months_since_first_donation', axis = 1)
pipeline = make_pipeline(MinMaxScaler(),
SelectKBest(f_classif),
LogisticRegression())
pipeline
param_grid = {
'selectkbest__k': [1, 2, 3],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipeline, param_grid = param_grid, cv = 5,
scoring = 'recall', verbose = 0)
gs.fit(X_train, y_train)
print('Cross-validation Best Score:', gs.best_score_)
print('Best estimator', gs.best_estimator_)
###Output
Cross-validation Best Score: 0.7786309551015433
Best estimator Pipeline(memory=None,
steps=[('minmaxscaler', MinMaxScaler(copy=True, feature_range=(0, 1))), ('selectkbest', SelectKBest(k=2, score_func=<function f_classif at 0x7f8c255f3378>)), ('logisticregression', LogisticRegression(C=0.1, class_weight='balanced', dual=False,
fit_intercept=True, intercept_scaling=1, max_iter=100,
multi_class='warn', n_jobs=None, penalty='l2', random_state=None,
solver='warn', tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Still no improvement in the best cross-validation score. Let's try a different scaler (RobustScaler). Part 2: Different transformations
###Code
pipeline = make_pipeline(RobustScaler(),
SelectKBest(f_classif),
LogisticRegression())
pipeline
param_grid = {
'selectkbest__k': [1, 2, 3],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipeline, param_grid = param_grid, cv = 5,
scoring = 'recall', verbose = 0)
gs.fit(X_train, y_train)
print('Cross-validation Best Score:', gs.best_score_)
print('Best estimator', gs.best_estimator_)
###Output
Cross-validation Best Score: 0.7632737632737633
Best estimator Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=2, score_func=<function f_classif at 0x7f8c255f3378>)), ('logisticregression', LogisticRegression(C=0.1, class_weight='balanced', dual=False,
...penalty='l2', random_state=None,
solver='warn', tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Try again with the full X_train (restoring the dropped column). Note that this run goes back to the MinMaxScaler pipeline.
###Code
X_train = df2.drop(['made_donation_in_march_2007'], axis = 1)
pipeline = make_pipeline(MinMaxScaler(),
SelectKBest(f_classif),
LogisticRegression())
pipeline
param_grid = {
'selectkbest__k': [1, 2, 3],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipeline, param_grid = param_grid, cv = 5,
scoring = 'recall', verbose = 0)
gs.fit(X_train, y_train)
print('Cross-validation Best Score:', gs.best_score_)
print('Best estimator', gs.best_estimator_)
###Output
Cross-validation Best Score: 0.7786309551015433
Best estimator Pipeline(memory=None,
steps=[('minmaxscaler', MinMaxScaler(copy=True, feature_range=(0, 1))), ('selectkbest', SelectKBest(k=2, score_func=<function f_classif at 0x7f8c255f3378>)), ('logisticregression', LogisticRegression(C=0.1, class_weight='balanced', dual=False,
fit_intercept=True, intercept_scaling=1, max_iter=100,
multi_class='warn', n_jobs=None, penalty='l2', random_state=None,
solver='warn', tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Part 3: Show the selected features and evaluate on the test set
###Code
# Show the names of the features kept by SelectKBest in the best pipeline
selected_mask = gs.best_estimator_.named_steps['selectkbest'].get_support()
print('Selected features:', list(X_train.columns[selected_mask]))
y_pred = gs.predict(X_test)
print('Final test score:', recall_score(y_test, y_pred))
###Output
Final test score: 0.8297872340425532
###Markdown
Part 4: Calculate F1 score and False Positive Rate
###Code
print('f1 score:', f1_score(y_test, y_pred))
# For the binary prediction, we can extract values
# from the confusion matrix as shown.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print('False positive rate:', fp / (fp + tn))
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
df.isnull().sum()
df.columns
df.head(1)
df.made_donation_in_march_2007.value_counts()
df.made_donation_in_march_2007.value_counts(normalize=True)
# To get accuracy score with a majority class baseline
y = df['made_donation_in_march_2007']
X = df.drop(columns = 'made_donation_in_march_2007')
majority_class = [0]
y_pred = majority_class * len(y)
from sklearn.metrics import accuracy_score
accuracy_score(y, y_pred)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
from sklearn.metrics import recall_score
recall_score(y, y_pred)
# recall score is 0, which is expected since, recall = true positive/actual positive,
# and majority class is 0 for baseline model
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error
from sklearn.feature_selection import f_classif, SelectKBest
pipeline = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs', max_iter=2000))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gridsearch = GridSearchCV(pipeline, param_grid, cv=5, scoring='recall',
verbose=5, return_train_score=True)
gridsearch.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=1, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=2, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=3, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=4
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=4, score=0.0, total= 0.0s
[CV] logisticregression__C=0.0001, logisticregression__class_weight=None, selectkbest__k=4
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
results = pd.DataFrame(gridsearch.cv_results_)
results.sort_values(by='rank_test_score').head(1).T
###Output
_____no_output_____
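###Markdown
The same information can be read directly off the fitted grid search object (a small sketch using the `gridsearch` fitted above):
###Code
# Best cross-validated recall and the parameter values that achieved it
print('Best CV recall:', gridsearch.best_score_)
print('Best parameters:', gridsearch.best_params_)
###Output
_____no_output_____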
###Markdown
Stretch Goal 3
###Code
selector = gridsearch.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
###Output
Features selected:
months_since_last_donation
Features not selected:
number_of_donations
total_volume_donated
months_since_first_donation
###Markdown
Part 4 — Calculate classification metrics from a confusion matrix. Suppose this is the confusion matrix for your binary classification model:

| | Predicted Negative | Predicted Positive |
| --- | --- | --- |
| Actual Negative | 85 | 58 |
| Actual Positive | 8 | 36 |
###Code
true_negative = 85
false_positive = 58
false_negative = 8
true_positive = 36
actual_negative = 85 + 58
actual_positive = 8 + 36
predicted_negative = 85 + 8
predicted_positive = 58 + 36
###Output
_____no_output_____
###Markdown
Calculate accuracy
###Code
accuracy = ((true_negative + true_positive) /
(true_negative + false_positive + false_negative + true_positive))
print('Accuracy is:', accuracy)
###Output
Accuracy is: 0.6470588235294118
###Markdown
Calculate recall
###Code
recall = true_positive / actual_positive
print('Recall is:', recall)
###Output
Recall is: 0.8181818181818182
###Markdown
Stretch Goal 4
###Code
precision = true_positive / predicted_positive
f1 = 2 * precision*recall / (precision+recall)
print('f1 score is:', f1)
false_positive_rate = false_positive/actual_negative
print('false positive rate is:', false_positive_rate)
###Output
false positive rate is: 0.40559440559440557
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
# Imports to get us started
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif, RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_absolute_error, accuracy_score, classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
# an initial look at the data
print(df.shape, '\n')
print(df.isnull().sum(), '\n')
df.head()
###Output
(748, 5)
months_since_last_donation 0
number_of_donations 0
total_volume_donated 0
months_since_first_donation 0
made_donation_in_march_2007 0
dtype: int64
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
# our target value is whether or not a participant made a donation in March (1, 0)
y = df['made_donation_in_march_2007']
majcl = y.mode()[0]
y_pred = np.full(shape=df.shape[0], fill_value=majcl)
# The majority class with a binary classification is simply the
# percentage of the mode in relation to the total observations
print('Majority Class Baseline: ', accuracy_score(y, y_pred), '\n')
print('Majority Class v.s. Minority as % of total :')
df['made_donation_in_march_2007'].value_counts(normalize=True)
###Output
Majority Class Baseline: 0.7620320855614974
Majority Class v.s. Minority as % of total :
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
print(classification_report(y, y_pred));
###Output
precision recall f1-score support
0 0.76 1.00 0.86 570
1 0.00 0.00 0.00 178
avg / total 0.58 0.76 0.66 748
###Markdown
The recall for each is just the True Positives (TP) divided by the sum of the True Positives and False Negatives. It could also be called the true positive rate. Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif), LogisticRegression())
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k' : [ 1, 2, 3, 4],
'logisticregression__class_weight' : [None, 'balanced'],
'logisticregression__C' : [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipe,
param_grid= param_grid,
cv= 5,
scoring= 'recall',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print('Best cross-validation score : ', gs.best_score_, '\n')
print('Best parameters : ')
# gs.best_params_
for i in gs.best_params_:
print(i, ':', gs.best_params_[i])
###Output
Best cross-validation score : 0.8152337858220211
Best parameters :
logisticregression__C : 0.0001
logisticregression__class_weight : balanced
selectkbest__k : 2
###Markdown
Part 4 — Calculate classification metrics from a confusion matrix. Suppose this is the confusion matrix for your binary classification model:

| | Predicted Negative | Predicted Positive |
| --- | --- | --- |
| Actual Negative | 85 | 58 |
| Actual Positive | 8 | 36 |

Calculate accuracy
###Code
# accuracy = (TruePositives+TrueNegatives)/AllOutcomes
accuracy = (36+85)/(85+58+8+36)
accuracy
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
# precision = TruePositives/(TruePositives+FalsePositives)
precision = 36/(36+58)
precision
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
# recall = TruePositives/(TruePositives+FalseNegatives) --> Sensitivity or True Positive Rate
recall = 36/(36+8)
recall
###Output
_____no_output_____
###Markdown
BONUS — How you can earn a score of 3. Part 1: Do feature engineering, to try improving your cross-validation score. Part 2: Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3: Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4: Calculate F1 score and False Positive Rate. Part 1 --> Polynomial Features
###Code
poly = PolynomialFeatures(degree=2)
X_train_polynomial = poly.fit_transform(X_train)
print(X_train.shape, X_train_polynomial.shape)
pipe = make_pipeline(StandardScaler(),
SelectKBest(f_classif),
LogisticRegression())
param_grid = {
'selectkbest__k' : [ 1, 2, 3, 4],
'logisticregression__class_weight' : [None, 'balanced'],
'logisticregression__C' : [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipe,
param_grid= param_grid,
cv= 5,
scoring= 'recall',
verbose=1)
gs.fit(X_train_polynomial, y_train)
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
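###Markdown
For bonus Part 2 (extra transformations in the pipeline and extra parameters in the grid), one option is to put `PolynomialFeatures` inside the pipeline and tune its degree alongside the other settings. The cell below is only a sketch of that idea; `pipe_poly`, `param_grid_poly` and `gs_poly` are names introduced here, and the search itself is left commented out.
###Code
# Sketch: tune the polynomial degree as part of the grid search.
# include_bias=False avoids a constant column that f_classif cannot score.
pipe_poly = make_pipeline(PolynomialFeatures(include_bias=False),
                          StandardScaler(),
                          SelectKBest(f_classif),
                          LogisticRegression())
param_grid_poly = {
    'polynomialfeatures__degree': [1, 2],
    'selectkbest__k': [1, 2, 3, 4],
    'logisticregression__class_weight': [None, 'balanced'],
    'logisticregression__C': [.01, .1, 1.0, 10.0],
}
gs_poly = GridSearchCV(pipe_poly, param_grid=param_grid_poly,
                       cv=5, scoring='recall', verbose=1)
# gs_poly.fit(X_train, y_train)  # uncomment to run the search on the raw training features
###Output
_____no_output_____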
###Markdown
Part 3:
###Code
selector = gs.best_estimator_.named_steps['selectkbest']
# The grid search was fit on the polynomial features, so use the expanded feature names
all_names = np.array(poly.get_feature_names(X_train.columns))
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
# Apply the same polynomial expansion to the held-out test set before scoring
test_score = gs.score(poly.transform(X_test), y_test)
print('Test Score:', test_score)
###Output
Test Score: 0.8333333333333334
###Markdown
Part 4: F1 Score and False Positive Rate
###Code
# Harmonic Mean of Precision and Recall
F1 = 2*(precision*recall)/(precision+recall)
f'F1 Score: {F1}'
# False Positives divided by False Positives and True Negatives
FPR = (58)/(85+58)
f'False Positive Rate: {FPR}'
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 748 entries, 0 to 747
Data columns (total 5 columns):
months_since_last_donation 748 non-null int64
number_of_donations 748 non-null int64
total_volume_donated 748 non-null int64
months_since_first_donation 748 non-null int64
made_donation_in_march_2007 748 non-null int64
dtypes: int64(5)
memory usage: 29.3 KB
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.) > Our accuracy score with majority class baseline is 0.762 (guessing donation was not made in March 2007)
###Code
# Our accuracy score with majority class baseline is 0.762
# Guessing donation was not made in March 2007
df.made_donation_in_march_2007.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.) > Recall for a class is the number of correctly predicted members of that class divided by the number of actual members of that class. - Recall for 'Not in March 2007' (0) with a majority class baseline would be 1. - Recall for 'March 2007' (1) with a majority class baseline would be 0.
###Code
# Recall is the number of correctly predicted divided by actual values
# Recall for 'Not in March 2007' (0) with majoritiy class baseline would be 1
# Recall for 'March 2007' (1) with majoritiy class baseline would be 0
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import accuracy_score, classification_report
y_pred = np.full(df.made_donation_in_march_2007.shape,
df.made_donation_in_march_2007.mode()[0])
print(classification_report(df.made_donation_in_march_2007, y_pred))
###Output
precision recall f1-score support
0 0.76 1.00 0.86 570
1 0.00 0.00 0.00 178
micro avg 0.76 0.76 0.76 748
macro avg 0.38 0.50 0.43 748
weighted avg 0.58 0.76 0.66 748
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
# Defining X and Y
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
# Splitting data into train & test
# Shuffle parameter is True by default with sklearn's train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=0)
# Checking shape for each set
X_train.shape, y_train.shape, X_test.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import StandardScaler
# Making a pipeline ('pipe')
pipe = make_pipeline(
    RobustScaler(), # RobustScaler scales with median/IQR, so it is robust to outliers
SelectKBest(f_regression),
LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=3,
scoring='accuracy', # using accuracy score to compare w/baseline
verbose=False)
gs.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
# best mean cross-validation score found by the grid search
print('Cross-Validation Score:', validation_score)
print()
print('Best estimator:', gs.best_estimator_)
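# Alternative (a sketch): gs.best_params_ returns just the chosen values of
# k, class_weight and C, without printing the whole fitted pipeline.
print('Best parameters:', gs.best_params_)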
###Output
Cross-Validation Score: 0.7896613190730838
Best estimator: Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=4, score_func=<function f_regression at 0x7f777eaac730>)), ('logisticregression', LogisticRegression(C=0.1, class_weight=None, dual=False, fit_i...enalty='l2', random_state=None, solver='lbfgs',
tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
# Accuracy: overall how often is the classifier correct
# TruePos + TrueNeg / Total
tp = 36 # true positive
tn = 85 # true negative
fp = 58 # false positive
fn = 8 # false negative
total = tp + tn + fp + fn
accuracy = (tp + tn) / total
accuracy
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
# Precision: Probability of correct prediction when it predicts Positive
# TruePos/predicted Positive
precision = tp / (tp + fp)
precision
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
# Recall: of all actual positives, how many did the model predict correctly
# TruePos / (TruePos + FalseNeg)
recall = tp / (tp + fn)
recall
###Output
_____no_output_____
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
# Using a copy of df as df1 for the Bonus Section (df.copy() so new columns don't alter df)
df1 = df.copy()
df1.head()
df1.columns
# new feature in the new DataFrame (df1): donation span divided by donation count
# (i.e. the average number of months per donation)
df1['avg_donations_per_month'] = (df1.months_since_first_donation -
                                  df1.months_since_last_donation) / df1.number_of_donations
# Defining X and Y
X1 = df1.drop(columns='made_donation_in_march_2007')
y1 = df1['made_donation_in_march_2007']
# Splitting data into train & test
# Shuffle parameter is True by default with sklearn's train_test_split
X_train1, X_test1, y_train1, y_test1 = train_test_split(
X1, y1, test_size=0.25, random_state=0)
# Checking shape for each set
X_train1.shape, y_train1.shape, X_test1.shape, y_test1.shape
# Making a sencond pipeline ('pipe2')
pipe2 = make_pipeline(
StandardScaler(), # <------ Trying StandardScaler now
SelectKBest(f_regression),
LogisticRegression(solver='lbfgs'))
# added new Cs
param_grid1 = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .0002, .001, .002, .01, .02, .1, .2,
1.0, 2.0, 10.0, 20.0, 100.00, 200.0, 1000.0,
2000.0, 10000.0, 20000.0]
}
# Fit on the train set, with grid search cross-validation
gs1 = GridSearchCV(pipe2, param_grid=param_grid1, cv=3,
scoring='accuracy', # using accuracy score to compare w/baseline
verbose=False)
gs1.fit(X_train1, y_train1)
validation_score1 = gs1.best_score_
# best mean cross-validation score found by the grid search
print('Cross-Validation Score:', validation_score1)
print()
print('Best estimator:', gs1.best_estimator_)
selector = gs1.best_estimator_.named_steps['selectkbest']
all_names = X_train1.columns
selected_mask = selector.get_support() # .get_support shows if feature was selected or not
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
# test score
test_score = gs1.score(X_test1, y_test1)
print('Test Score:', test_score)
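# Bonus Part 4 (a sketch added here; it reuses tp, tn, fp, fn from the
# confusion-matrix cells above): F1 score and False Positive Rate.
f1 = 2 * tp / (2 * tp + fp + fn)      # harmonic mean of precision and recall, ~0.52
false_positive_rate = fp / (fp + tn)  # FP / all actual negatives, ~0.41
print('F1 score:', f1)
print('False Positive Rate:', false_positive_rate)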
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3.
###Code
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, mean_absolute_error, recall_score
from sklearn.model_selection import cross_val_predict, cross_val_score, train_test_split, cross_validate
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head(3)
df.shape
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
#made_donation is binary, so let's make baseline all zeroes
prediction = [0] * len(df)
accuracy_score(df['made_donation_in_march_2007'], prediction) #our baseline accuracy is 76.20%
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
recall_score(df['made_donation_in_march_2007'], prediction)
#bonus - feature engineering
df['donations_per_month'] = df['number_of_donations'] / (df['months_since_first_donation'] - df['months_since_last_donation'])
df['volume_per_donation'] = df['total_volume_donated'] / df['number_of_donations']
#I was getting an 'input contains infinity' warning when fitting the pipeline
#(division by zero where months_since_first_donation == months_since_last_donation)
import numpy as np
np.isfinite(df).sum() #...sadly donations_per_month has to be dropped
df.isnull().sum() #...and there are no nans
df.info() #...and they are properly encoded (no string nulls)
df = df.drop(columns='donations_per_month')
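# Sanity check (added sketch, not part of the original answer): the repeated
# "Features [4] are constant" warnings in the grid-search output further down
# point at the engineered volume_per_donation column (index 4 once the target
# is dropped) -- it is constant here, so SelectKBest cannot score it.
df.nunique()  # unique values per column; a constant column shows 1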
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.25)
# checking the class balance: an earlier 'outlier cleaning' cell removed every 1,
# which caused a single-class error when fitting, so that cell was dropped
ytrain.value_counts()
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipeline = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs', max_iter=5000))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0],
'logisticregression__class_weight': [None, 'balanced']
}
gridsearch = GridSearchCV(pipeline, param_grid=param_grid, cv=5, scoring='recall')
gridsearch.fit(Xtrain, ytrain)
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:114: UserWarning: Features [4] are constant.
UserWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/feature_selection/univariate_selection.py:115: RuntimeWarning: invalid value encountered in true_divide
f = msb / msw
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.
DeprecationWarning)
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
gsresults = pd.DataFrame(gridsearch.cv_results_).sort_values(by='rank_test_score')
gsresults = gsresults[['rank_test_score', 'mean_test_score', 'mean_train_score',
'param_selectkbest__k', 'param_logisticregression__class_weight',
'param_logisticregression__C']]
gsresults.head(1)
gsresults.loc[37]
###Output
_____no_output_____
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
false_positive = 58
true_positive = 36
false_negative = 8
true_negative = 85
accuracy = (true_positive + true_negative) / (true_positive + true_negative + false_positive + false_negative)
accuracy
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
actual_negative = 85 + 58
actual_positive = 8 + 36
predicted_negative = 85 + 8
predicted_positive = 58 + 36
precision = true_positive / predicted_positive
precision
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
recall = true_positive / actual_positive
recall
#F1 Score
f1 = 2*precision*recall / (precision+recall)
f1
#false positive
false_positive_rate = false_positive/actual_negative
false_positive_rate
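# Sketch (not part of the original solution): cross-check the hand calculations
# above with sklearn.metrics by rebuilding y_true / y_pred from the four counts.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
y_true_cm = [0] * (true_negative + false_positive) + [1] * (false_negative + true_positive)
y_pred_cm = [0] * true_negative + [1] * false_positive + [0] * false_negative + [1] * true_positive
print('accuracy :', accuracy_score(y_true_cm, y_pred_cm))
print('precision:', precision_score(y_true_cm, y_pred_cm))
print('recall   :', recall_score(y_true_cm, y_pred_cm))
print('f1       :', f1_score(y_true_cm, y_pred_cm))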
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
def ini_preview(df):
print(df.head().T)
print("-"*100)
for i in df.columns:
print(i)
print(df[i].value_counts().index.sort_values())
print("-"*100)
ini_preview(df)
###Output
0 1 2 3 4
months_since_last_donation 2 0 1 2 1
number_of_donations 50 13 16 20 24
total_volume_donated 12500 3250 4000 5000 6000
months_since_first_donation 98 28 35 45 77
made_donation_in_march_2007 1 1 1 1 0
----------------------------------------------------------------------------------------------------
months_since_last_donation
Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 25, 26, 35, 38, 39, 40, 72, 74], dtype='int64')
----------------------------------------------------------------------------------------------------
number_of_donations
Int64Index([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 26, 33, 34, 38, 41, 43, 44, 46, 50], dtype='int64')
----------------------------------------------------------------------------------------------------
total_volume_donated
Int64Index([250, 500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500, 2750, 3000, 3250, 3500, 3750, 4000, 4250, 4500, 4750, 5000, 5250, 5500, 5750, 6000, 6500, 8250, 8500, 9500, 10250, 10750, 11000, 11500, 12500], dtype='int64')
----------------------------------------------------------------------------------------------------
months_since_first_donation
Int64Index([2, 3, 4, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 62, 63, 64, 65, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 86, 87, 88, 89, 93, 95, 98], dtype='int64')
----------------------------------------------------------------------------------------------------
made_donation_in_march_2007
Int64Index([0, 1], dtype='int64')
----------------------------------------------------------------------------------------------------
###Markdown
Import
###Code
%matplotlib inline
from scipy import stats
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import seaborn as sns
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 500)
# preview data
print("df shape:"), print(df.shape), print("---"*20)
print("df columns:"), print(df.columns), print("---"*20)
print("df select_dtypes(include=[np.number]).columns.values:"), print(df.select_dtypes(include=[np.number]).columns.values), print("---"*20)
print("df select_dtypes(exclude=[np.number]).columns:"), print(df.select_dtypes(exclude=[np.number]).columns), print("---"*20)
print("df dtypes.sort_values(ascending=False):"), print(df.dtypes.sort_values(ascending=False)), print("---"*20)
print("df head().T:"), print(df.head().T), print("---"*20)
print("df isnull().sum().sum():"), print(df.isnull().sum().sum()), print("---"*20)
print("df isna().sum().sort_values(ascending=False):"), print(df.isna().sum().sort_values(ascending=False)), print("---"*20)
# nan finder
print("columns[df.isna().any()].tolist():"), print(df.columns[df.isna().any()].tolist()), print("")
# stats data
print("df corr().T:"), print(df.corr().T), print("")
print("df describe(include='all').T:"), print(df.describe(include='all').T), print("")
###Output
df shape:
(748, 5)
------------------------------------------------------------
df columns:
Index(['months_since_last_donation', 'number_of_donations', 'total_volume_donated', 'months_since_first_donation', 'made_donation_in_march_2007'], dtype='object')
------------------------------------------------------------
df select_dtypes(include=[np.number]).columns.values:
['months_since_last_donation' 'number_of_donations' 'total_volume_donated'
'months_since_first_donation' 'made_donation_in_march_2007']
------------------------------------------------------------
df select_dtypes(exclude=[np.number]).columns:
Index([], dtype='object')
------------------------------------------------------------
df dtypes.sort_values(ascending=False):
made_donation_in_march_2007 int64
months_since_first_donation int64
total_volume_donated int64
number_of_donations int64
months_since_last_donation int64
dtype: object
------------------------------------------------------------
df head().T:
0 1 2 3 4
months_since_last_donation 2 0 1 2 1
number_of_donations 50 13 16 20 24
total_volume_donated 12500 3250 4000 5000 6000
months_since_first_donation 98 28 35 45 77
made_donation_in_march_2007 1 1 1 1 0
------------------------------------------------------------
df isnull().sum().sum():
0
------------------------------------------------------------
df isna().sum().sort_values(ascending=False):
made_donation_in_march_2007 0
months_since_first_donation 0
total_volume_donated 0
number_of_donations 0
months_since_last_donation 0
dtype: int64
------------------------------------------------------------
columns[df.isna().any()].tolist():
[]
df corr().T:
months_since_last_donation number_of_donations total_volume_donated months_since_first_donation made_donation_in_march_2007
months_since_last_donation 1.000000 -0.182745 -0.182745 0.160618 -0.279869
number_of_donations -0.182745 1.000000 1.000000 0.634940 0.218633
total_volume_donated -0.182745 1.000000 1.000000 0.634940 0.218633
months_since_first_donation 0.160618 0.634940 0.634940 1.000000 -0.035854
made_donation_in_march_2007 -0.279869 0.218633 0.218633 -0.035854 1.000000
df describe(include='all').T:
count mean std min 25% 50% 75% max
months_since_last_donation 748.0 9.506684 8.095396 0.0 2.75 7.0 14.0 74.0
number_of_donations 748.0 5.514706 5.839307 1.0 2.00 4.0 7.0 50.0
total_volume_donated 748.0 1378.676471 1459.826781 250.0 500.00 1000.0 1750.0 12500.0
months_since_first_donation 748.0 34.282086 24.376714 2.0 16.00 28.0 50.0 98.0
made_donation_in_march_2007 748.0 0.237968 0.426124 0.0 0.00 0.0 0.0 1.0
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
from sklearn.metrics import accuracy_score
# Data source
X = df.drop(columns=["made_donation_in_march_2007"], axis=1)
y = df["made_donation_in_march_2007"]
# Majority class baseline = mode
majority_class = y.mode()[0]
y_pred = np.full(shape=y.shape, fill_value=majority_class)
# Accuracy score
accuracy = accuracy_score(y,y_pred)
print('Accuracy:',accuracy)
###Output
Accuracy: 0.7620320855614974
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
from sklearn.metrics import recall_score
recall = recall_score(y, y_pred)
print('Recall score from majority class baseline:',recall)
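# Every prediction equals the majority class (0), so there are no true positives
# and recall = TP / (TP + FN) = 0 / (0 + 178) = 0.0.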
###Output
Recall score from majority class baseline: 0.0
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=True, test_size=0.25)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
# data Process
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.preprocessing import PolynomialFeatures
# model setup
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.linear_model import LogisticRegression
# metric
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score
pipeline = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression(solver = 'lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
# Define param_grid
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C' : [.0001,.001,.01,.1,1.0,10.0,100.00,1000.0,10000.0]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipeline, param_grid=param_grid,cv=5, scoring='recall', verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
# Cross-Validation Results
validation_score = gs.best_score_
print('Validation Score: ', validation_score)
print('Best parameter:', gs.best_params_)
print('Best estimator:', gs.best_estimator_)
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names=all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print(all_names)
print("-"*100)
print('Features selected:')
for name in selected_names:
print(name)
print("-"*100)
print("Features not selected:")
for name in unselected_names:
print(name)
print("-"*100)
y_pred = gs.predict(X_test)
recall = recall_score(y_test, y_pred)
print('recall_score:', recall)
###Output
Index(['months_since_last_donation', 'number_of_donations', 'total_volume_donated', 'months_since_first_donation'], dtype='object')
----------------------------------------------------------------------------------------------------
Features selected:
months_since_last_donation
total_volume_donated
----------------------------------------------------------------------------------------------------
Features not selected:
number_of_donations
months_since_first_donation
----------------------------------------------------------------------------------------------------
recall_score: 0.8571428571428571
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
true_negative = 85
false_positive = 58
false_negative = 8
true_positive = 36
predicted_negative = true_negative + false_negative
predicted_positive = true_positive + false_positive
actual_negative = true_negative + false_positive
actual_positive = true_positive + false_negative
accuracy = (true_negative + true_positive) / (true_negative + false_positive + false_negative + true_positive)
precision = true_positive / predicted_positive
recall = true_positive / actual_positive
print(accuracy)
###Output
0.6470588235294118
###Markdown
Calculate precision
###Code
print(precision)
###Output
0.3829787234042553
###Markdown
Calculate recall
###Code
print(recall)
###Output
0.8181818181818182
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score.
###Code
from sklearn.preprocessing import RobustScaler
# Data source
X = df.drop(columns=["made_donation_in_march_2007"], axis=1)
y = df["made_donation_in_march_2007"]
# Test polynomialFeatures before split
poly = PolynomialFeatures()
X = poly.fit_transform(X)
X = pd.DataFrame(X)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=True, test_size=0.25)
pipeline = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression(solver = 'liblinear'))
warnings.filterwarnings(action='ignore', category=RuntimeWarning)
# Define param_grid
param_grid = {
'selectkbest__k': range(1, len(X_train.columns)+1),
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C' : [.0001,.001,.01,.1,1.0,10.0,100.00,1000.0,10000.0]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipeline, param_grid=param_grid,cv=5, scoring='recall', verbose=1)
gs.fit(X_train, y_train)
# Cross-Validation Results
validation_score = gs.best_score_
print('Validation Score: ', validation_score)
print('Best parameter:', gs.best_params_)
print('Best estimator:', gs.best_estimator_)
###Output
Validation Score: 0.8003565062388592
Best parameter: {'logisticregression__C': 0.01, 'logisticregression__class_weight': 'balanced', 'selectkbest__k': 8}
Best estimator: Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=8, score_func=<function f_classif at 0x7f15064ee510>)), ('logisticregression', LogisticRegression(C=0.01, class_weight='balanced', dual=False,
...ty='l2', random_state=None,
solver='liblinear', tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score?
###Code
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names=all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print(all_names)
print("-"*100)
print('Features selected:')
for name in selected_names:
print(name)
print("-"*100)
print("Features not selected:")
for name in unselected_names:
print(name)
print("-"*100)
y_pred = gs.predict(X_test)
recall = recall_score(y_test, y_pred)
print('recall_score:', recall)
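# Optional sketch: the "features" above are polynomial-feature column indices.
# Assuming the `poly` object fitted earlier is still in scope, the indices can be
# mapped back to readable names.
input_names = ['months_since_last_donation', 'number_of_donations',
               'total_volume_donated', 'months_since_first_donation']
poly_names = poly.get_feature_names(input_names)
print('Selected polynomial features:', [poly_names[i] for i in selected_names])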
###Output
RangeIndex(start=0, stop=15, step=1)
----------------------------------------------------------------------------------------------------
Features selected:
1
2
3
5
8
9
10
12
----------------------------------------------------------------------------------------------------
Features not selected:
0
4
6
7
11
13
14
----------------------------------------------------------------------------------------------------
recall_score: 0.7169811320754716
###Markdown
Part 4Calculate F1 score and False Positive Rate.
###Code
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test, y_pred))
print("-"*100)
pd.DataFrame(confusion_matrix(y_test, y_pred),
columns=['Predicted Negative', 'Predicted Positive'],
index=['Actual Negative', 'Actual Positive'])
true_negative = 83
false_positive = 51
false_negative = 15
true_positive = 38
predicted_negative = true_negative + false_negative
predicted_positive = true_positive + false_positive
actual_negative = true_negative + false_positive
actual_positive = true_positive + false_negative
accuracy = (true_negative + true_positive) / (true_negative + false_positive + false_negative + true_positive)
precision = true_positive / predicted_positive
recall = true_positive / actual_positive
FPR = false_positive/(false_positive+true_negative)
f1 = 2 * precision*recall / (precision+recall)
print('Accuracy:',accuracy)
print('Precision:',precision)
print('Recall:',recall)
print('False Positive Rate:',FPR)
print('F1 Score:',f1)
###Output
Accuracy: 0.6470588235294118
Precision: 0.42696629213483145
Recall: 0.7169811320754716
False Positive Rate: 0.3805970149253731
F1 Score: 0.5352112676056338
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
import numpy as np
import warnings
from sklearn.dummy import DummyClassifier
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.metrics import accuracy_score, recall_score
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
pd.set_option('display.max_columns', None) # all cols
pd.set_option('display.width', 161)
warnings.filterwarnings('ignore')
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.sample(8)
df.describe()
df.info()
df["made_donation_in_march_2007"].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
X = df.drop(columns = 'made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
majority_class = 0
y_pred = [majority_class] * len(y)
print("Accuracy Score:", accuracy_score(y, y_pred))
###Output
Accuracy Score: 0.7620320855614974
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
# Recall Score = True Positive / (True Positive + False Negative)
print("Recall Score:", recall_score(y, y_pred))
###Output
Recall Score: 0.0
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
print("X_train shape:", X_train.shape,
"\nX_test shape:", X_test.shape,
"\ny_train shape:", y_train.shape,
"\ny_test shape:", y_test.shape)
###Output
X_train shape: (561, 4)
X_test shape: (187, 4)
y_train shape: (561,)
y_test shape: (187,)
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipeline = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs')
)
pipeline
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipeline,
param_grid=param_grid,
cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print('Best Cross-Validation Score:', gs.best_score_)
print('Best parameters:', gs.best_params_ )
print("Test-set score:",gs.score(X_test, y_test))
###Output
Test-set score: 0.7708333333333334
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36
###Code
true_negative = 85
false_positive = 58
false_negative = 8
true_positive = 36
actual_negative = true_negative + false_positive
actual_positive = false_negative + true_positive
predicted_negative = true_negative + false_negative
predicted_positive = false_positive + true_positive
###Output
_____no_output_____
###Markdown
Calculate accuracy
###Code
accuracy = ((true_negative + true_positive) /
(true_negative + false_positive + false_negative + true_positive))
print("Accuracy:", accuracy)
###Output
Accuracy: 0.6470588235294118
###Markdown
Calculate precision
###Code
precision = true_positive / predicted_positive
print("Precision:", precision)
###Output
Precision: 0.3829787234042553
###Markdown
Calculate recall
###Code
recall = true_positive / actual_positive
print("Recall:", recall)
###Output
Recall: 0.8181818181818182
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
from sklearn.svm import SVC  # SVC is used below but was not imported in the setup cell above
pipe = Pipeline([('preprocessing', StandardScaler()), ('classifier', SVC())])
param_grid = [
{'classifier': [SVC()], 'preprocessing': [StandardScaler(), None],
'classifier__gamma': [0.001, 0.01, 0.1, 1, 10, 100],
'classifier__C': [0.001, 0.01, 0.1, 1, 10, 100]},
{'classifier': [RandomForestClassifier(n_estimators=100)],
'preprocessing': [None], 'classifier__max_features': [1, 2, 3]}]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42)
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best params:\n", grid.best_params_)
print("==>>Best cross-validation score:", grid.best_score_, "which is an improvement over previous:", gs.best_score_)
print("Test-set score:", grid.score(X_test, y_test), "- previous:", gs.score(X_test, y_test))
###Output
Best params:
{'classifier': SVC(C=100, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=0.1, kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False), 'classifier__C': 100, 'classifier__gamma': 0.1, 'preprocessing': StandardScaler(copy=True, with_mean=True, with_std=True)}
==>>Best cross-validation score: 0.7950089126559715 which is an improvement over previous: 0.784519402166461
Test-set score: 0.7540106951871658 - previous: 0.7708333333333334
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.isna().sum()
df.head()
# 1 stand for donating blood; 0 stands for not donating blood
df.shape
df.made_donation_in_march_2007.value_counts()
###Output
_____no_output_____
###Markdown
**Mode is 'Did Not Donate'** Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007'] == 0
import numpy as np
majority_class = y.mode()[0]
y_pred = np.full(shape=y.shape, fill_value=majority_class)
# no need to calculate accuracy score function
# just use value counts line instead
# these steps are just for demonstration
y.shape, y_pred.shape
# option 1 with sklearn
from sklearn.metrics import accuracy_score
accuracy_score(y, y_pred)
# option 2 with value counts
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
**76.2% majority-class baseline accuracy** What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
# option 1 with classification report to display recall
from sklearn.metrics import classification_report
print(classification_report(y, y_pred))
# option 2 with recall score
from sklearn.metrics import recall_score
print(recall_score(y,y_pred))
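# Note: in this notebook `y` was defined as (made_donation_in_march_2007 == 0),
# so the "positive" class here is *not donating*. Predicting the majority class
# therefore captures every positive and recall is 1.0; with the original 0/1
# target, the majority-class recall would be 0.0, as in the notebooks above.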
###Output
1.0
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
# reassign X and y so we include all y outcomes
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
X.shape, y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42, shuffle=True)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
import sklearn.feature_selection as fe
pipeline = make_pipeline(
StandardScaler(), fe.SelectKBest(k=4),
LogisticRegression(solver='lbfgs'))
pipeline_balanced = make_pipeline(
StandardScaler(), fe.SelectKBest(k=4),
LogisticRegression(class_weight='balanced', solver='lbfgs'))
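# Note: pipeline_balanced is defined for comparison only and is never fitted;
# the grid search below tunes class_weight directly on `pipeline`.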
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {'selectkbest__k': [1, 2, 3, 4],
'logisticregression__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000],
'logisticregression__class_weight': [None, 'balanced']}
# fit on the train set, with grid search cross-validation
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(pipeline, param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
# best validation score
gs.best_score_
# best parameters
gs.best_params_
###Output
_____no_output_____
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
# (tp+tn)/total
print((85+36)/(85+58+8+36))
###Output
0.6470588235294118
###Markdown
Calculate precision
###Code
# tp/predicted yes
print(36/(58+36))
###Output
0.3829787234042553
###Markdown
Calculate recall
###Code
# tp/actual yes
print(36/(8+36))
###Output
0.8181818181818182
###Markdown
Calculate F1 Score
###Code
# 2*((precision*recall)/(precision+recall))
p_times_r = (36/(58+36))*(36/(8+36))
p_plus_r = (36/(58+36))+(36/(8+36))
f1 = 2*(p_times_r / p_plus_r)
print(f1)
###Output
0.5217391304347826
###Markdown
Calculate False Positive Rate
###Code
# fp/actual no
print(58/(58+85))
###Output
0.40559440559440557
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
import numpy as np
from sklearn.metrics import accuracy_score
# Making X and y dfs
X = df.drop(columns='made_donation_in_march_2007')
y=df['made_donation_in_march_2007']
# Making majority class for mode
majority_class = y.mode()[0]
y_pred = np.full(shape=y.shape, fill_value=majority_class)
# Accuracy score
accuracy_score(y,y_pred)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
from sklearn.metrics import recall_score
recall_score(y, y_pred)
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, shuffle=True)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.preprocessing import RobustScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import f_classif, SelectKBest
pipeline = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C' : [.0001,.001,.01,.1,1.0,10.0,100.00,1000.0,10000.0]
}
gs = GridSearchCV(pipeline, param_grid=param_grid,cv=5,
scoring='recall', verbose=1)
gs.fit(X_train, y_train)
validation_score = gs.best_score_
print('Cross-Validation Score: ', validation_score)  # recall is not a negated error metric, so no sign flip is needed
print()
print('Best estimator:', gs.best_estimator_)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print('Cross-Validation Score: ', validation_score)  # recall is not a negated error metric, so no sign flip is needed
print()
print('Best estimator:', gs.best_estimator_)
###Output
Cross-Validation Score:  0.7804338000416432
Best estimator: Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=2, score_func=<function f_classif at 0x7ff5b6e889d8>)), ('logisticregression', LogisticRegression(C=0.1, class_weight='balanced', dual=False,
...enalty='l2', random_state=None,
solver='lbfgs', tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
(36+85)/(36+85+58+8)
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
36/(36+58)
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
36/(36+8)
###Output
_____no_output_____
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
df.head()
df['total_volume_donated_sq'] = df['total_volume_donated']**2
df['donations_per_month'] = df['months_since_first_donation']/df['number_of_donations']
import numpy as np
from sklearn.metrics import accuracy_score
# Making X and y dfs
X = df.drop(columns='made_donation_in_march_2007')
y=df['made_donation_in_march_2007']
# Making majority class for mode
majority_class = y.mode()[0]
y_pred = np.full(shape=y.shape, fill_value=majority_class)
# Accuracy score
accuracy_score(y,y_pred)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, shuffle=True)
from sklearn.preprocessing import RobustScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import Ridge
pipeline = make_pipeline(
RobustScaler(),
SelectKBest(f_regression),
LogisticRegression(solver='lbfgs'))
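# Note: f_regression is the regression F-test; it runs on the 0/1 target, but
# f_classif (used in the main pipeline above) is the score function intended for
# classification.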
from sklearn.model_selection import GridSearchCV
param_grid = {
'selectkbest__k': range(1, len(X_train.columns)+1),
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C' : [.0001,.001,.01,.1,1.0,10.0,100.00,1000.0,10000.0]
}
gs = GridSearchCV(pipeline, param_grid=param_grid,cv=5,
scoring='recall', verbose=1)
gs.fit(X_train, y_train)
validation_score = gs.best_score_
print('Cross-Validation Score: ', validation_score)  # recall is not a negated error metric, so no sign flip is needed
print()
print('Best estimator:', gs.best_estimator_)
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names=all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print("Features not selected:")
for name in unselected_names:
print(name)
y_pred = gs.predict(X_test)
test_score = recall_score(y_test, y_pred)
print('Test Score:', test_score)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
print('False positive rate:', 69/(71+69))
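# Sketch: 69/(71+69) above hard-codes counts from one particular run; reading the
# counts off the confusion matrix keeps the rate correct if the split changes.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print('False positive rate (from confusion matrix):', fp / (fp + tn))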
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.feature_selection import f_classif, SelectKBest
import seaborn as sns
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
print(df.shape)
df.head()
print(type(df.made_donation_in_march_2007[0]))
print(df.made_donation_in_march_2007[0])
###Output
<class 'numpy.int64'>
1
###Markdown
This is a classification problem, so let's use classification metrics. First, establish a majority-class baseline using the mode value of the target feature: made_donation_in_march_2007
###Code
df.made_donation_in_march_2007.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
majority class is 0 (no donation) No-donations were 76% and donations were only 24% Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
majority_class = df.made_donation_in_march_2007.mode()[0]
y_vals = df.made_donation_in_march_2007
y_vals.shape
y_pred = np.full(shape=y_vals.shape, fill_value=majority_class)
y_pred.shape
from sklearn.metrics import accuracy_score
print("ACCURACY SCORE =",accuracy_score(y_vals, y_pred))
###Output
ACCURACY SCORE = 0.7620320855614974
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
from sklearn.metrics import confusion_matrix
confusion_matrix(y_vals, y_pred)
0 / (0 + 178)  # recall = TP / (TP + FN) from the confusion matrix above
from sklearn.metrics import recall_score
print('RECALL SCORE =',recall_score(y_vals, y_pred))
###Output
RECALL SCORE = 0.0
###Markdown
** Recall score = 0. Makes sense. Recall is the true positive rate. Since the majority class is zero (negative), all predictions were set to 0 (all negative predictions). With no positive predictions, the true positive rate is zero. ** Engineer some new features
###Code
# df['avg_donation']=(df.total_volume_donated / df.number_of_donations).astype(float)
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
X = df.drop(columns='made_donation_in_march_2007')
y = df.made_donation_in_march_2007
# Uses our custom train_validation_test_split function
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, train_size=0.75, val_size=0.0, test_size=0.25, random_state=1)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression())
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__solver': ['liblinear','lbfgs'], #lbfgs
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
# Fit on the train set 5-folds, scoring=recall
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 144 candidates, totalling 720 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
# Which features were selected?
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('selectKbest Results\n')
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
###Output
selectKbest Results
Features selected:
months_since_last_donation
number_of_donations
total_volume_donated
months_since_first_donation
Features not selected:
###Markdown
Final evaluation on the test set
###Code
# use GridSearchCV.score method,
test_score = gs.score(X_test, y_test)
print('Test Score:', test_score)
###Output
Test Score: 0.82
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
(85+36)/(85+58+8+36)
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
36/(36+58)
###Output
_____no_output_____
###Markdown
Calculate recall - True Positive Rate
###Code
36 / (36+8)
###Output
_____no_output_____
###Markdown
F1 score = 2 x (recall x precision) / (recall+precision)
###Code
2*((.81*.38) / (.81+.38))
###Output
_____no_output_____
###Markdown
True Negative Rate = True Negatives / (True Negatives + False Positives)
###Code
85 / (85+58)
###Output
_____no_output_____
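###Markdown
 A quick cross-check of the hand calculations above: rebuild label arrays from the stated confusion-matrix counts (TN=85, FP=58, FN=8, TP=36) and score them with scikit-learn. The names `y_true_cm` and `y_pred_cm` are illustrative and used only for this sketch.
###Code
# Rebuild y_true / y_pred arrays that produce the confusion matrix
# [[TN=85, FP=58], [FN=8, TP=36]] and recompute the metrics with sklearn.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
y_true_cm = np.array([0]*(85+58) + [1]*(8+36))           # actual negatives, then actual positives
y_pred_cm = np.array([0]*85 + [1]*58 + [0]*8 + [1]*36)   # predictions in the same row order
print('accuracy :', accuracy_score(y_true_cm, y_pred_cm))
print('precision:', precision_score(y_true_cm, y_pred_cm))
print('recall   :', recall_score(y_true_cm, y_pred_cm))
print('f1       :', f1_score(y_true_cm, y_pred_cm))
###Output
_____no_output_____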
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
df.isnull().sum()
df.columns
# Using simple pandas value counts method
df.made_donation_in_march_2007.value_counts(normalize=True)
# Using sklearn accuracy_score
import numpy as np
majority_class = df.made_donation_in_march_2007.mode()[0]
prediction = np.full(shape=df.made_donation_in_march_2007.shape,
fill_value=majority_class)
from sklearn.metrics import accuracy_score
accuracy_score(df.made_donation_in_march_2007, prediction)
###Output
_____no_output_____
###Markdown
Baseline Accuracy Score is 76%. What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
from sklearn.metrics import confusion_matrix
confusion_matrix(df.made_donation_in_march_2007, prediction)
###Output
_____no_output_____
###Markdown
Recall score is Recall = TP/Actual PositiveIn this case it is 0. Confirming the same with sklearn.
###Code
from sklearn.metrics import recall_score
recall_score(df.made_donation_in_march_2007, prediction)
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
# Spliting the independent and dependent variables.
X = df.drop(columns=['made_donation_in_march_2007'])
y = df.made_donation_in_march_2007
# Split data into train and test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
shuffle=True)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
# Imports for pipeline
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
# Create pipeline
pipeline = make_pipeline(
    RobustScaler(),
    SelectKBest(f_classif),
    LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None,'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gridsearch = GridSearchCV(pipeline, param_grid=param_grid, cv=5,
scoring='recall', verbose=1)
gridsearch.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print('Cross Validation Score:', gridsearch.best_score_)
print('Best Parameters:', gridsearch.best_params_)
###Output
Cross Validation Score: 0.7938022761552174
Best Parameters: {'logisticregression__C': 1.0, 'logisticregression__class_weight': 'balanced', 'selectkbest__k': 2}
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative TN = 85 FP = 58 Positive FN = 8 TP = 36 Calculate accuracy TP - True PositiveTN - True NegativeFP - False PostiveFN - False Negative
###Code
# Accuracy = (TP+TN)/Total
accuracy = (36+85)/(85+58+8+36)
print('Accuracy is:', accuracy)
###Output
Accuracy is: 0.6470588235294118
###Markdown
Calculate precision
###Code
# Precision = TP/Predicted Positive
precision = 36/(58+36)
print('Precision is:', precision)
###Output
Precision is: 0.3829787234042553
###Markdown
Calculate recall
###Code
# Recall = TP/Actual Positive
recall = 36/(8+36)
print('Recall is:', recall)
# F1 Score = 2*(Recall * Precision) / (Recall + Precision)
f1_score = 2*(recall * precision) / (recall + precision)
print('F1 Score is:', f1_score)
# False Positive Rate = FP/Actual Negative
false_pos_rate = 58/(85+58)
print('False Positive Rate is:', false_pos_rate)
###Output
False Positive Rate is: 0.40559440559440557
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The **goal** is to predict the **last column** = whether the donor made a **donation in March 2007**, using information about each donor's history. We'll measure success using **_recall score_ as the model evaluation metric**.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
# initial imports
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split as tts
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.feature_selection import f_classif  # needed for SelectKBest(f_classif) below
from sklearn.linear_model import LogisticRegression
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
print(df.shape) # 748 rows by 5 columns
print(df.isna().sum()) # zero nan's; thanks, Ryan Herr!
df.head()
###Output
(748, 5)
months_since_last_donation 0
number_of_donations 0
total_volume_donated 0
months_since_first_donation 0
made_donation_in_march_2007 0
dtype: int64
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
# make copy of df, work w copy going forward
df1 = df.copy()
# will refrain, in this cell, from yet doing tts on df1
# Hat Tip to Ryan Herr/LSDS
X = df1.drop('made_donation_in_march_2007', axis='columns')
y_true = df1.made_donation_in_march_2007
majority_class = y_true.mode()[0]
y_pred = np.full(shape=y_true.shape, fill_value=majority_class)
# validate
print(y_true.shape, y_pred.shape)
all(y_pred == majority_class)
# compute accuracy_score
print('accuracy score is:', accuracy_score(y_true, y_pred))
###Output
accuracy score is: 0.7620320855614974
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
# compute recall_score
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html
'''
The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives.
The recall is intuitively the ability of the classifier to find all the positive samples.
'''
print('recall score is:', recall_score(y_true, y_pred, average=None))
###Output
recall score is: [1. 0.]
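###Markdown
 The array above is the per-class recall (class 0 first, then class 1). The question is about the positive class, so the single number of interest is the second entry; the one-liner below is a minimal sketch that pulls out just that value using the default binary averaging.
###Code
# recall for the positive class only (average='binary', pos_label=1 are the defaults)
recall_score(y_true, y_pred)
###Output
_____no_output_____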
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
# generate cross_val_score_model
# cf. https://github.com/johnpharmd/DS-Unit-2-Sprint-4-Model-Validation/blob/master/module-1-begin-modeling-process/LS_DS_241_Begin_modeling_process_LIVE_LESSON.ipynb
X_train, X_test, y_train, y_test = tts(X, y_true, shuffle=True)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
# make pipeline, which is kernel_svm
# hat tip to Ryan Herr/LSDS for following URL:
# https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb
cls = SVC(C=10.0, kernel='rbf', gamma=0.1, decision_function_shape='ovr')
kernel_svm = Pipeline([('std', StandardScaler()), ('svc', cls)])
# select features using SelectKBest
features = SelectKBest(f_classif, k=3)
# perform classification using LogReg
log_reg = LogisticRegression().fit(X_train, y_train)
###Output
C:\Users\jhump\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
# perform GridSearchCV
# make param_grid
param_grid = [{'svc__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0],
'svc__gamma': [0.001, 0.0001], 'svc__kernel': ['rbf']},]
param_grid_adjust = [{'k': [1, 2, 3, 4], 'class_weight': [None, 'balanced'],
'svc__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]},]
# make gs object
gs = GridSearchCV(estimator=kernel_svm, param_grid=param_grid_adjust,
scoring='recall',
n_jobs=-1,
cv=5,
verbose=1,
refit=True,
pre_dispatch='2*n_jobs')
# run gs
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
# display best gscv score, and the best parameters, from the gs
# best cv score
print('Best GS Score %.2f' % gs.best_score_)
# best parameters COMMENT: need to refactor param_grid for k, class_weight, and C
print('best GS Params %s' % gs.best_params_)
###Output
Best GS Score 0.09
best GS Params {'svc__C': 10000.0, 'svc__gamma': 0.001, 'svc__kernel': 'rbf'}
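###Markdown
 A minimal sketch of how the grid requested in Part 2.2 (`k`, `class_weight`, `C`) could be keyed to a `SelectKBest` + `LogisticRegression` pipeline: `make_pipeline` names its steps after the lowercased class names, so the grid keys use the `selectkbest__` and `logisticregression__` prefixes. The names `sketch_pipe`, `sketch_grid`, and `sketch_gs` are illustrative, and the fit call is left commented out.
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
# pipeline whose step names match the grid keys below
sketch_pipe = make_pipeline(StandardScaler(),
                            SelectKBest(f_classif),
                            LogisticRegression(solver='liblinear'))
sketch_grid = {
    'selectkbest__k': [1, 2, 3, 4],
    'logisticregression__class_weight': [None, 'balanced'],
    'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.0, 1000.0, 10000.0],
}
sketch_gs = GridSearchCV(sketch_pipe, param_grid=sketch_grid,
                         cv=5, scoring='recall', verbose=1)
# sketch_gs.fit(X_train, y_train)  # then inspect sketch_gs.best_score_ / sketch_gs.best_params_
###Output
_____no_output_____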
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
# accuracy == (TP + TN)/Total
accuracy = (36 + 85)/187
print('accuracy is:', accuracy)
###Output
accuracy is: 0.6470588235294118
###Markdown
Calculate precision
###Code
# precision == TP/(TP + FP)
precision = 36/(36 + 58)
print('precision is:', precision)
###Output
precision is: 0.3829787234042553
###Markdown
Calculate recall
###Code
# recall == sensitivity == TP/P
recall = 36/44
print('recall is:', recall)
###Output
recall is: 0.8181818181818182
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import numpy as np
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_regression, f_classif
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
df.shape
df.dtypes
df.isna().sum()
df.head()
#ok so the data is numeric and in binary classification form, coolsies!
df.describe()
df.made_donation_in_march_2007.mean()
from sklearn.metrics import mean_absolute_error
# MAE of always predicting the target's mean (an error metric for the baseline,
# not the majority-class accuracy the prompt asks for; see the sketch below)
baseline = [df.made_donation_in_march_2007.mean()] * len(df.made_donation_in_march_2007)
mean_absolute_error(df.made_donation_in_march_2007, baseline)
###Output
_____no_output_____
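###Markdown
 For comparison with the MAE figure above, a minimal sketch of the majority-class accuracy baseline the prompt asks for: predict the mode for every row and score with `accuracy_score`. The names `majority` and `baseline_pred` are illustrative.
###Code
from sklearn.metrics import accuracy_score
majority = df.made_donation_in_march_2007.mode()[0]           # majority class (0)
baseline_pred = np.full(shape=df.made_donation_in_march_2007.shape,
                        fill_value=majority)                  # predict 0 for every row
accuracy_score(df.made_donation_in_march_2007, baseline_pred)
###Output
_____no_output_____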
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
maj_classification = df.made_donation_in_march_2007.mode()[0]  # mode() returns a Series; take the scalar
y_pred = np.full(shape=df['made_donation_in_march_2007'].shape, fill_value=maj_classification)
recall_score(df['made_donation_in_march_2007'], y_pred)
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
X = df.drop(['made_donation_in_march_2007'], axis = 1)
y = df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle = True, train_size = .75, test_size = .25)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipeline = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression()
)
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
# 'kernal':['logistic']
# 'selectkbest__k': range(0, 5),
# 'class_weight':['balanced', None],
# 'C': [.0001, .001, .01, .1, 1, 10, 100, 1000, 10000]
param_grid = {
'logisticregression__C': [.0001, .001, .01, .1, 1, 10, 100, 1000, 10000],
'logisticregression__class_weight':['balanced', None],
'selectkbest__k': range(1, 5)}
gs = GridSearchCV(pipeline, param_grid=param_grid, cv=5,
scoring='recall',
verbose=0)
gs.fit(X_train, y_train)
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.
DeprecationWarning)
###Markdown
Part 3 — Show best score and parameters

Display your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.

(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print('Cross-Validation Score:', validation_score)
print('Best estimator:', gs.best_estimator_)
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print('Features not selected:')
for name in unselected_names:
print(name)
y_pred = gs.predict(X_test)
test_score = mean_absolute_error(y_test, y_pred)
print('Test Score:', test_score)
###Output
Test Score: 0.36363636363636365
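###Markdown
The prompt also asks for the best parameter values themselves; a minimal one-line sketch, assuming the fitted grid search object is still bound to `gs`:
###Code
# Winning parameter combination found by the grid search
print('Best parameters:', gs.best_params_)
###Output
_____no_output_____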
###Markdown
Part 4 — Calculate classification metrics from a confusion matrix

Suppose this is the confusion matrix for your binary classification model:

|                 | Predicted Negative | Predicted Positive |
|-----------------|--------------------|--------------------|
| Actual Negative | 85                 | 58                 |
| Actual Positive | 8                  | 36                 |

Calculate accuracy
###Code
#the number you predicted correctly divided by every one of your predictions
acc = (85 + 36) / (85+36+58+8)
acc
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
#number of times you guessed yes and were right divided by your total predicted yeses
prec = 36 / (58 + 36)
prec
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
#the number of times you guessed yes and were correct divided by total yeses in the set
recall = 36 / (8+36)
recall
# F1 score (harmonic mean of precision and recall)
2 * ((prec * recall) / (prec + recall))
###Output
_____no_output_____
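###Markdown
As a quick cross-check (a sketch, not part of the challenge prompt), the same four metrics can be recomputed by rebuilding label arrays from the confusion-matrix counts (85 TN, 58 FP, 8 FN, 36 TP) and letting scikit-learn do the arithmetic; the `_cm` names are illustrative only:
###Code
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Rebuild label arrays that reproduce the counts in the matrix above
y_true_cm = np.array([0] * 85 + [0] * 58 + [1] * 8 + [1] * 36)
y_pred_cm = np.array([0] * 85 + [1] * 58 + [0] * 8 + [1] * 36)

print('accuracy :', accuracy_score(y_true_cm, y_pred_cm))   # (85 + 36) / 187
print('precision:', precision_score(y_true_cm, y_pred_cm))  # 36 / (36 + 58)
print('recall   :', recall_score(y_true_cm, y_pred_cm))     # 36 / (36 + 8)
print('f1       :', f1_score(y_true_cm, y_pred_cm))
###Output
_____no_output_____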
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation

Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3.

Predicting Blood Donations

Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive. The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.

Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need.

Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
df.dtypes
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselines

What **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007'] == 1
majority_class = y.mode()[0]
y_pred = np.full(shape=y.shape, fill_value=majority_class)
from sklearn.metrics import accuracy_score
acc = accuracy_score(y, y_pred)
print('Accuracy Score: ', acc)
X.dtypes
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?** (You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
from sklearn.metrics import recall_score
rec = recall_score(y, y_pred, average='weighted')
print('Recall Score: ', rec)
###Output
Recall Score: 0.7620320855614974
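###Markdown
The weighted-average recall above simply equals the baseline's accuracy. As a minimal sketch of the recall the challenge actually measures (recall for the positive class, i.e. donors who returned), the same all-majority-class prediction scores 0, because it never predicts a positive:
###Code
# Positive-class recall: the baseline predicts no positives, so TP = 0 and recall = 0
from sklearn.metrics import recall_score
print('Positive-class recall:', recall_score(y, y_pred))
###Output
_____no_output_____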
###Markdown
Part 1.2 — Split data

In this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.

First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=.75, test_size=.25, random_state=None, shuffle=True)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipeline

Make a **pipeline** which includes:

- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing)
- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**
- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
pipeline = make_pipeline(
StandardScaler(),
SelectKBest(f_classif, k='all'),
LogisticRegression(random_state=0, solver='lbfgs'))
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
acc1 = accuracy_score(y_test, y_pred)
roc = roc_auc_score(y_test, y_pred)
print('New Accuracy Score: ', acc1)
print()
print('Roc_Auc Score: ', roc)
print()
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
def confusion_viz(y_true, y_pred):
matrix = confusion_matrix(y_true, y_pred)
return sns.heatmap(matrix, annot=True,
fmt=',', linewidths=1, linecolor='grey',
square=True,
xticklabels=['Predicted\nNo', 'Predicted\nYes'],
yticklabels=['Actual\nNo', 'Actual\nYes'])
confusion_matrix(y_test, y_pred)
confusion_viz(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-Validation

Do [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.

Include these **parameters for your grid:**

`SelectKBest`
- `k : 1, 2, 3, 4`

`LogisticRegression`
- `class_weight : None, 'balanced'`
- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`

**Fit** on the appropriate data.
###Code
import numpy as np
import pandas as pd
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_regression),
Ridge())
param_grid = {
'selectkbest__k': (1, 2, 3, 4),
'ridge__alpha': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
gs = GridSearchCV(pipe, param_grid=param_grid, cv=3,
scoring='neg_mean_absolute_error',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 3 folds for each of 36 candidates, totalling 108 fits
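###Markdown
The cell above runs a `Ridge`/`SelectKBest(f_regression)` grid scored with mean absolute error, which is a regression setup rather than the classification grid described in the prompt. As a hedged sketch only, a grid matching the prompt (5 folds, recall scoring, the `SelectKBest` and `LogisticRegression` parameters listed above) could look roughly like this, reusing the `pipeline` object from Part 2.1; the names `gs_clf` and `param_grid_clf` are illustrative and deliberately kept distinct from the `gs` used above:
###Code
from sklearn.model_selection import GridSearchCV

# Parameter grid from the prompt, keyed by the step names make_pipeline assigns
param_grid_clf = {
    'selectkbest__k': [1, 2, 3, 4],
    'logisticregression__class_weight': [None, 'balanced'],
    'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.0, 1000.0, 10000.0],
}

gs_clf = GridSearchCV(pipeline, param_grid=param_grid_clf, cv=5,
                      scoring='recall', verbose=1)
gs_clf.fit(X_train, y_train)
###Output
_____no_output_____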
###Markdown
Part 3 — Show best score and parameters

Display your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.

(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', -validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
print('Best Params:', gs.best_params_)
print()
###Output
Cross-Validation Score: 0.3545252126722372
Best estimator: Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=4, score_func=<function f_regression at 0x7f84f2a7a730>)), ('ridge', Ridge(alpha=0.0001, copy_X=True, fit_intercept=True, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001))])
Best Params: {'ridge__alpha': 0.0001, 'selectkbest__k': 4}
###Markdown
Part 4 — Calculate classification metrics from a confusion matrix

Suppose this is the confusion matrix for your binary classification model:

|                 | Predicted Negative | Predicted Positive |
|-----------------|--------------------|--------------------|
| Actual Negative | 85                 | 58                 |
| Actual Positive | 8                  | 36                 |

Calculate accuracy
###Code
# accuracy = (TN + TP) / all predictions
accuracy = (85 + 36) / (85 + 58 + 8 + 36)
print(accuracy)
###Output
0.6470588235294118
###Markdown
Calculate precision
###Code
# precision = TP / (TP + FP), i.e. true positives over all predicted positives
precision = 36 / (36 + 58)
print(precision)
###Output
0.3829787234042553
###Markdown
Calculate recall
###Code
# recall = TP / (TP + FN), i.e. true positives over all actual positives
recall = 36 / (36 + 8)
print(recall)
###Output
0.8181818181818182
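###Markdown
Not required by the prompt, but (as in the earlier notebook) the F1 score follows directly from the two values just computed; a small sketch reusing the `precision` and `recall` variables from the cells above:
###Code
# F1 is the harmonic mean of precision and recall
f1 = 2 * (precision * recall) / (precision + recall)
print(f1)
###Output
_____no_output_____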
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation

Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3.

Predicting Blood Donations

Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive. The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.

Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need.

Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselines

What **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
df.columns
# numpy method
import numpy as np
majority_class = df['made_donation_in_march_2007'].mode()[0]
y_pred = np.full(shape=df['made_donation_in_march_2007'].shape, fill_value=majority_class)
df['made_donation_in_march_2007'].shape, y_pred.shape
# majority class baseline
from sklearn.metrics import accuracy_score
accuracy_score(df['made_donation_in_march_2007'], y_pred)
# pandas function method
df.made_donation_in_march_2007.value_counts(normalize=True)
# pandas method 2
1 - df.made_donation_in_march_2007.mean()
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?** (You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)

Recall for the majority class baseline: zero relevant donors (donors who returned) are selected, so recall = TP / (TP + FN) = 0 / (0 + 178) = 0. Zero is the majority class label; the majority (76%) of donors did not return.
###Code
from sklearn.metrics import classification_report
print(classification_report(df['made_donation_in_march_2007'], y_pred))
###Output
precision recall f1-score support
0 0.76 1.00 0.86 570
1 0.00 0.00 0.00 178
micro avg 0.76 0.76 0.76 748
macro avg 0.38 0.50 0.43 748
weighted avg 0.58 0.76 0.66 748
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
# Small dataset - two ways split
from sklearn.model_selection import train_test_split
X=df.drop('made_donation_in_march_2007', axis=1)
y=df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42, shuffle=True)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
scores = cross_validate(LogisticRegression(solver='lbfgs'), X_train, y_train,
scoring='accuracy', cv=3,
return_train_score=True, return_estimator=True)
pd.DataFrame(scores)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import f_classif, SelectKBest
pipe = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression())
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=FutureWarning)
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
#df=df.astype('float64')
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight': [None,'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
recall_scorer = make_scorer(recall_score)
precision_scorer = make_scorer(precision_score)
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
#scoring= 'recall',
scoring=recall_scorer,
return_train_score=True,
verbose=0)
gs.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print()
print(' Best validation Recall Score:', validation_score)
print()
print('Best paramter:', gs.best_params_)
print()
print('Best estimator:', gs.best_estimator_)
print()
results = pd.DataFrame(gs.cv_results_)
print(f'Best result from grid search of {len(results)} parameter combinations')
results.sort_values(by='rank_test_score').head()
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
###Output
Features selected:
months_since_last_donation
number_of_donations
Features not selected:
total_volume_donated
months_since_first_donation
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
accuracy=(85+36)/(85+58+8+36)
# true positive / overall positive
precision= 36/(36+58) # num of selected are relevant
# true positive / selected positive
recall=36/(36+8) # num of relevant are selected
# false positive
fpr=58/(58+85)
###Output
_____no_output_____
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
# Part 3
from sklearn.metrics import precision_score
y_pred = gs.predict(X_test)
accuracy=accuracy_score(y_test, y_pred)
precision=precision_score(y_test, y_pred) # selected are relevant
recall=recall_score(y_test, y_pred) # relevant are selected
print(f'Accuracy={accuracy}, Precision={precision}, Recall={recall}')
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
gs_test_score = gs.score(X_test, y_test)
print('GridSearchCV Test Score:', gs_test_score)
# Part 4
#The F1 score can be interpreted as a weighted average of the precision and recall.
#F1 score reaches its best value at 1 and worst score at 0.
#F1 = 2 * (precision * recall) / (precision + recall)
from sklearn.metrics import f1_score
f1_sklearn=f1_score(y_test, y_pred)
f1_manual=2 * (precision * recall)/(precision + recall)
print(f'F1_sklearn={f1_sklearn}, F1_manual={f1_manual}')
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
accuracy=(72+40)/(72+40+8+67)
# true positive / overall positive
precision= 40/(40+67) # num of selected are relevant
# true positive / selected positive
recall=40/(40+8) # num of relevant are selected
# false positive
fpr=67/(67+72)
print('Manual=Precision={:0.2f}, Recall={:0.2f}, Accuracy={:0.2f}, \
False positive={:0.2f}'.format(precision,recall,accuracy,fpr))
accuracy=accuracy_score(y_test, y_pred)
precision=precision_score(y_test, y_pred) # selected are relevant
recall=recall_score(y_test, y_pred)
print('Sklearn=Precision={:0.2f}, Recall={:0.2f}, Accuracy={:0.2f}, \
False positive={:0.2f}'.format(precision,recall,accuracy,fpr))
import matplotlib.pyplot as plt
import seaborn as sns
def confusion_viz(y_true, y_pred, normalize=False):
matrix = confusion_matrix(y_true, y_pred)
if (normalize):
matrix = matrix.astype('float') / matrix.sum(axis=1)[:, np.newaxis] # normalize
matrix = np.round(matrix,2)
return sns.heatmap(matrix, annot=True,
fmt=',', linewidths=1, linecolor='grey',
square=True,
xticklabels=['Predicted\nNo', 'Predicted\nYes'],
yticklabels=['Actual\nNo', 'Actual\nYes'])
confusion_viz(y_test, y_pred, normalize=False);
confusion_viz(y_test, y_pred, normalize=True)
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
target = df["made_donation_in_march_2007"]
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import classification_report
target.mode()
baseline = np.full(shape=target.shape, fill_value=target.mode())
mean_absolute_error(target, baseline)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.) Answerthe recall would be 1 for the 0 class and 0 for the 1 class. with an average recall of 1-.237
###Code
print(classification_report(target, baseline))
###Output
precision recall f1-score support
0 0.76 1.00 0.86 570
1 0.00 0.00 0.00 178
avg / total 0.58 0.76 0.66 748
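###Markdown
A quick arithmetic sketch of the weighted-average recall claimed above, using the 570/178 class supports shown in the report.
###Code
# Weighted-average recall of the all-zeros baseline:
# class 0 recall is 1.0 (570 rows), class 1 recall is 0.0 (178 rows).
print((570 * 1.0 + 178 * 0.0) / (570 + 178))  # ~ 0.762, i.e. 1 - 0.238
###Output
_____no_output_____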
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df.drop(["made_donation_in_march_2007"],axis=1),target)
X_train.head()
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LogisticRegression
pipe = make_pipeline(RobustScaler(), SelectKBest(f_regression), LogisticRegression())
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
class_weight = [None,'balanced']
param_grid = {'selectkbest__k' : range(1,len(X_train.columns)+1),
'logisticregression__class_weight': class_weight,
              'logisticregression__C' : [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0, 10000.0] }
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5, scoring ='accuracy')
gs.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print("best cross validation score:", gs.best_score_,"\n")
print("best C params: ", gs.best_params_["logisticregression__C"])
print("best SelectKBest params: ", gs.best_params_["selectkbest__k"])
y_pred = gs.predict(X_test)
print(classification_report(y_test,y_pred))
###Output
precision recall f1-score support
0 0.80 0.94 0.86 143
1 0.55 0.25 0.34 44
avg / total 0.74 0.78 0.74 187
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
t = 85+58+36+8
s = (85+36)/t
print(s)
###Output
0.6470588235294118
###Markdown
accuracy is 64.7% Calculate precision
###Code
n = 85/(85+8)
p= 36/(36+58)
print(n,"\n",p)
###Output
0.9139784946236559
0.3829787234042553
###Markdown
precision for negatives is 91%, precision for positives is 38% Calculate recall
###Code
n = 85 / (85+58)
p = (36/44)
print(n,"\n",p)
###Output
0.5944055944055944
0.8181818181818182
###Markdown
recall for negatives is 59%, recall for positives is 82% BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate. Feature engineering
###Code
X_train.head()
def feature_engineering(X_train):
X_train["active_months"] = X_train["months_since_first_donation"]- X_train["months_since_last_donation"]
X_train["active_months"] = X_train["active_months"] +1
X_train["donations_per_month"] = X_train["number_of_donations"] /X_train["active_months"]
X_train["sqrt_months_since_last_donation"] = np.sqrt(X_train["months_since_last_donation"])
X_train["sqrt_number_of_donations"]=np.sqrt(X_train["number_of_donations"])
return X_train
import seaborn as sns
sns.distplot(np.sqrt(X_train["months_since_last_donation"]))
import matplotlib.pyplot as plt
plt.scatter(X_train["donations_per_month"],y_train)
from sklearn.decomposition import PCA
X_train_fe.columns
range(1,len(X_train_fe.columns.values)+1)
X_train, X_test, y_train, y_test = train_test_split(df.drop(["made_donation_in_march_2007"],axis=1),target)
X_train_fe = feature_engineering(X_train)
X_test_fe = feature_engineering(X_test)
pipe = make_pipeline(RobustScaler(), SelectKBest(f_regression), LogisticRegression())
class_weight = [{0:1,1:1}, {0:1,1:10},{0:1,1:25}, {0:1,1:100}]
param_grid = {'selectkbest__k' : range(1,len(X_train_fe.columns.values)),
'logisticregression__class_weight': class_weight,
'logisticregression__C' : [1.0,10.0, 12.5, 25.,100.0,] }
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5, scoring ='accuracy')
gs.fit(X_train_fe,y_train)
y_pred = gs.predict(X_test_fe)
print("best cross validation score:", gs.best_score_,"\n")
print("best C params: ", gs.best_params_["logisticregression__C"])
print("best SelectKBest params: ", gs.best_params_["selectkbest__k"])
print(classification_report(y_test,y_pred))
print("false positive rate: ")
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred)
def confusion_viz(y_true, y_pred):
matrix = confusion_matrix(y_true,y_pred)
return sns.heatmap(matrix, annot=True,
fmt=',', linewidths=1,linecolor='grey',
square=True,
xticklabels=['Predicted\nNO', 'Predicted\nYES'],
yticklabels=['Actual\nNO', 'Actual\nYES'])
confusion_viz(y_test, y_pred)
print("false positive rate", 11/(11+134))
import sklearn.feature_selection as fe
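# Note: mutual_info_classif is the scorer designed for a discrete target;
# mutual_info_regression still runs on this 0/1 column, so the scores below
# are best read as a rough feature ranking.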
d = fe.SelectKBest(fe.mutual_info_regression, k=7).fit(X_train_fe, y_train)
d_scores = pd.Series(data=d.scores_, name='d_scores')
for i in range(0,len(X_train_fe.columns.values)):
print(X_train.columns.values[i])
print(d_scores[i],"\n")
###Output
months_since_last_donation
0.058125578236169595
number_of_donations
0.03560639453530268
total_volume_donated
0.04309113779824614
months_since_first_donation
0.052028520153594826
active_months
0.04063397067393115
donations_per_month
0.0
sqrt_months_since_last_donation
0.050534906911157584
sqrt_number_of_donations
0.08690550830259713
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_absolute_error, accuracy_score, recall_score, roc_auc_score, confusion_matrix
X = df.drop(columns=["made_donation_in_march_2007"], axis=1)
y = df["made_donation_in_march_2007"]
y_pred = [y.mode() for i in range(len(y))]
print(accuracy_score(y,y_pred))
print(y.mode())
###Output
0.7620320855614974
0 0
dtype: int64
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
print(recall_score(y,y_pred))
# This is because no positives are predicted in this case, and since recall = TP/(TP+FN), recall = 0
###Output
0.0
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
pipeline = Pipeline([('scale', StandardScaler()),
('kbest', SelectKBest()),
('model', LogisticRegression())])
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings('ignore')
params = {"kbest__k": [1,2,3,4],
"model__class_weight": [None, 'balanced'],
"model__C": [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]}
grid_model = GridSearchCV(pipeline, params, scoring='recall', cv=5, verbose=10)
grid_model.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None ...........
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None ...........
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None ...........
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None ...........
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None ...........
[CV] kbest__k=1, model__C=0.0001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced .......
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced, score=0.7307692307692307, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced .......
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced, score=0.6538461538461539, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced .......
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced, score=0.9230769230769231, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced .......
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced, score=0.8846153846153846, total= 0.0s
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced .......
[CV] kbest__k=1, model__C=0.0001, model__class_weight=balanced, score=0.8076923076923077, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=None ............
[CV] kbest__k=1, model__C=0.001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=None ............
[CV] kbest__k=1, model__C=0.001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=None ............
[CV] kbest__k=1, model__C=0.001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=None ............
[CV] kbest__k=1, model__C=0.001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=None ............
[CV] kbest__k=1, model__C=0.001, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced ........
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced, score=0.7307692307692307, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced ........
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced, score=0.6538461538461539, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced ........
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced, score=0.9230769230769231, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced ........
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced, score=0.8846153846153846, total= 0.0s
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced ........
[CV] kbest__k=1, model__C=0.001, model__class_weight=balanced, score=0.8076923076923077, total= 0.0s
[CV] kbest__k=1, model__C=0.01, model__class_weight=None .............
[CV] kbest__k=1, model__C=0.01, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.01, model__class_weight=None .............
[CV] kbest__k=1, model__C=0.01, model__class_weight=None, score=0.0, total= 0.0s
[CV] kbest__k=1, model__C=0.01, model__class_weight=None .............
[CV] kbest__k=1, model__C=0.01, model__class_weight=None, score=0.0, total= 0.0s
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
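# Note: best_estimator_.score(X_test, y_test) below is the refit pipeline's default
# score (accuracy) on the held-out test set; the best cross-validation recall itself
# is stored in grid_model.best_score_.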
print(grid_model.best_estimator_.score(X_test, y_test))
print(grid_model.best_params_)
###Output
0.5989304812834224
{'kbest__k': 2, 'model__C': 0.0001, 'model__class_weight': 'balanced'}
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
conf_mat = pd.DataFrame([[85,58],[8,36]], columns=["Negative", "Positive"], index=["Negative", "Positive"])
accuracy = (conf_mat.loc["Positive"]["Positive"] + conf_mat.loc["Negative"]["Negative"]) / (conf_mat["Positive"].sum() + conf_mat["Negative"].sum())
print(accuracy)
###Output
0.6470588235294118
###Markdown
Calculate precision
###Code
precision = conf_mat.loc["Positive"]["Positive"] / (conf_mat.loc["Positive"]["Positive"] + conf_mat.loc["Negative"]["Positive"])
print(precision)
###Output
0.3829787234042553
###Markdown
Calculate recall
###Code
recall = conf_mat.loc["Positive"]["Positive"] / (conf_mat.loc["Positive"]["Positive"] + conf_mat.loc["Positive"]["Negative"])
print(recall)
###Output
0.8181818181818182
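###Markdown
As a compact cross-check, a minimal sketch that unpacks the same `conf_mat` DataFrame with `ravel()` and recomputes the three metrics from the four counts.
###Code
# Row-major unpacking of [[85, 58], [8, 36]]: tn, fp, fn, tp
tn, fp, fn, tp = conf_mat.values.ravel()
print((tp + tn) / (tp + tn + fp + fn))  # accuracy  ~ 0.647
print(tp / (tp + fp))                   # precision ~ 0.383
print(tp / (tp + fn))                   # recall    ~ 0.818
###Output
_____no_output_____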
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
# Part 1
import numpy as np  # needed below for np.inf
for c in X_train:
df["inv_{}".format(c)] = 1.0 / df[c]
df = df[df.replace([np.inf, -np.inf], np.nan).notnull().all(axis=1)]
X = df.drop(columns=["made_donation_in_march_2007"], axis=1)
y = df["made_donation_in_march_2007"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
df.describe()
# Part 2
from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np
class GaussianFeatures(BaseEstimator, TransformerMixin):
"""Uniformly spaced Gaussian features for one-dimensional input"""
def __init__(self, N=2, width_factor=2.0):
self.N = N
self.width_factor = width_factor
@staticmethod
def _gauss_basis(x, y, width, axis=None):
arg = (x - y) / width
return np.exp(-0.5 * np.sum(arg ** 2, axis))
def fit(self, X, y=None):
# create N centers spread along the data range
self.centers_ = np.zeros((self.N, X.shape[1]))
self.width_ = np.zeros(X.shape[1])
for i in range(X.shape[1]):
self.centers_[:,i] = np.linspace(X[:,i].min(), X[:,i].max(), self.N)
self.width_[i] = self.width_factor * (self.centers_[1,i] - self.centers_[0,i])
return self
def transform(self, X):
out = np.zeros((X.shape[0], self.N * X.shape[1]))
for i in range(X.shape[1]):
out[:,i] = self._gauss_basis(X[:, i, np.newaxis], self.centers_[:,i], self.width_[i], axis=1)
return out
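# A minimal usage sketch (the toy input values below are made up): fit the custom
# transformer on a tiny 2-column array just to see the expanded output shape.
_toy = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
print(GaussianFeatures(N=3).fit(_toy).transform(_toy).shape)  # (3, N * n_features) = (3, 6)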
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import RandomizedSearchCV
from sklearn.feature_selection import RFECV
pipeline = Pipeline([('scale', StandardScaler()),
('features', FeatureUnion([("poly", PolynomialFeatures()), ("gauss", GaussianFeatures())])),
('kbest', SelectKBest()),
('model', LogisticRegression())])
params = {"kbest__k": [i for i in range(5,20+1,5)],
"model__class_weight": [None, 'balanced'],
"model__C": [.0001, .01, 1.0, 100.00, 10000.0],
"features__poly__degree": [1,2,3],
"features__gauss__N": [i for i in range(2,5+1,1)]}
grid_model = GridSearchCV(pipeline, params, scoring='recall', cv=5, verbose=10)
grid_model.fit(X_train, y_train)
print(grid_model.best_score_)
print(grid_model.best_params_)
# pipe = grid_model.best_estimator_
# def transform(pipe, X):
# x = X.copy()
# for i in pipe.steps:
# if i[0] != "model":
# x = i[1].transform(x)
# return x
# RFECV_model = RFECV(grid_model.best_estimator_.named_steps["model"], cv=5, verbose=1)
# RFECV_model.fit(transform(pipe, X_train), y_train)
# Part 3
print(grid_model.score(X_test, y_test))
print(grid_model.best_params_)
model = grid_model.best_estimator_
features = model.named_steps["features"].transformer_list[0][1].get_feature_names(list(X_train)) + ["{}_g{}".format(x,i) for x in X_train for i in range(grid_model.best_params_['features__gauss__N'])]
features = [x for i,x in enumerate(features) if model.named_steps["kbest"].get_support()[i]]
model = model.named_steps["model"]
coef = model.coef_[0]
indexes = features
if model.intercept_:
indexes = ["Intercept"] + indexes
coef = np.concatenate([model.intercept_, coef])
coef_out = pd.DataFrame(np.array([coef, np.abs(coef)]).T, columns=["Coefficients", "abs(Coefficients)"], index=indexes)
coef_out.sort_values(by="abs(Coefficients)", ascending=False)
import matplotlib.pyplot as plt
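# Note: this cell relies on the RFECV_model and transform() helper that are commented
# out above; as the code stands they are never defined, so the next line would raise
# a NameError unless those cells are uncommented and run first.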
y_pred = RFECV_model.predict_proba(transform(pipe, X_test))[:,-1]
plt.hist([y_pred[y_test == 0], y_pred[y_test == 1]], stacked=True, bins=20)
# Part 4
F1 = 2.0 * (precision * recall) / (precision + recall)
print("F1 Score:", F1)
FPR = conf_mat.loc["Negative"]["Positive"] / conf_mat.loc["Negative"].sum()  # FP / (FP + TN)
print("False Positive Rate:", FPR)
###Output
_____no_output_____
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
(85 + 36)/(85 +36 + 8 + 58)
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
36 / (36 + 58)
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
36 / (36 + 8)
###Output
_____no_output_____
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
# Which features were selected?
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
# Compare predictions to y_test labels
from sklearn.metrics import recall_score
y_pred = gs.best_estimator_.predict(X_test)
test_score = recall_score(y_test, y_pred)
print('Test Score:', test_score)
from sklearn.feature_selection import RFECV
X_train_scaled = RobustScaler().fit_transform(X_train)
rfe = RFECV(LogisticRegression(solver='lbfgs'), scoring='recall', cv=5)
X_train_subset = rfe.fit_transform(X_train_scaled, y_train)
all_names = X_train.columns
selected_mask = rfe.support_
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
X_train_subset = pd.DataFrame(X_train_subset, columns=selected_names)
X_test_subset = rfe.transform(X_test)
X_test_subset = pd.DataFrame(X_test_subset, columns=selected_names)
print(X_train.shape, X_train_subset.shape, X_test.shape, X_test_subset.shape)
param_grid = {
'selectkbest__k': [1, 2, 3],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [i/10 for i in range(1, 20)]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train_subset, y_train)
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
# Compare predictions to y_test labels
from sklearn.metrics import recall_score
y_pred = gs.predict(X_test_subset)
test_score = recall_score(y_test, y_pred)
print('Test Score:', test_score)
gs.scorer_
test_score = gs.score(X_test_subset, y_test)
print('Test Score:', test_score)
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2)
X_train_polynomial = poly.fit_transform(X_train)
print(X_train.shape, X_train_polynomial.shape)
X_test_polynomial = poly.transform(X_test)
print(X_test.shape, X_test_polynomial.shape)
from sklearn.feature_selection import RFECV
scaler = RobustScaler()
X_train_scaled = scaler.fit_transform(X_train_polynomial)
rfe = RFECV(LogisticRegression(solver='lbfgs'), scoring='recall', cv=5, verbose=1)
X_train_subset = rfe.fit_transform(X_train_scaled, y_train)
X_test_scaled = scaler.transform(X_test_polynomial)
X_test_subset = rfe.transform(X_test_scaled)
X_train.shape, X_train_polynomial.shape, X_train_scaled.shape, X_train_subset.shape
X_test.shape, X_test_polynomial.shape, X_test_scaled.shape, X_test_subset.shape
all_names = poly.get_feature_names(X_train.columns)
selected_mask = rfe.support_
selected_names = [name for name, selected in zip(all_names, selected_mask) if selected]
print(f'{rfe.n_features_} Features selected:')
for name in selected_names:
print(name)
# Define an estimator and param_grid
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_regression),
LogisticRegression(solver='lbfgs'))
param_grid = {
'selectkbest__k': [1, 2, 3, 4],
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [i/10 for i in range(1, 20)]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train_subset, y_train)
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
gs.scorer_
X_test_subset.shape
test_score = gs.score(X_test_subset, y_test)
print('Test Score:', test_score)
###Output
Test Score: 0.7708333333333334
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.**The goal is to predict the last column, whether the donor made a donation in March 2007**, using information about each donor's history. We'll measure success **using recall score** as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
#!pip install seaborn --upgrade
import pandas as pd
import numpy as np
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score, f1_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
import warnings
import time
warnings.filterwarnings('ignore')
import seaborn as sns
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
df.isnull().sum().sum()
X = df.drop(columns='made_donation_in_march_2007')
y = df.made_donation_in_march_2007
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
# Finding Baseline Accuracy using Sklearn
import numpy as np
y_pred = np.full(shape=y.shape, fill_value=y.mode())  # predict the majority class for every row
baseline_accuracy = accuracy_score(y, y_pred)
print ('Baseline Accuracy',baseline_accuracy)
# Finding Baseline Accuracy using Dummy Classifier
pipe = make_pipeline(
DummyClassifier(strategy='most_frequent',random_state=42))
pipe.fit(X, y)
# Get the scores with the appropriate score function
# Predict with X features and Compare predictions to y labels
y_pred = pipe.predict(X)
dummy_score = accuracy_score(y, y_pred)
print(y.sum())
print(y_pred.sum())
print('Dummy Classification Score (Accuracy):', dummy_score)
###Output
178
0
Dummy Classification Score (Accuracy): 0.7620320855614974
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
# This question is tricky: the majority class baseline never predicts the positive class,
# so there are no true positives and no false positives at all.
print(f"Let's check the value counts so we know what our majority class is:\n{y.value_counts()}\n")
print("Because our mode is zero, recall = true positives / (true positives + false negatives) = 0 / (0 + 178) = 0")
# Finding Baseline Recall using Sklearn
import numpy as np
y_pred = np.full(shape=y.shape, fill_value=y.mode())  # predict the majority class for every row
baseline_recall = recall_score(y, y_pred)
print ('Baseline Recall',baseline_recall)
# Finding Baseline Recall Using Dummy Classifier
pipe = make_pipeline(
DummyClassifier(strategy='most_frequent',random_state=42))
pipe.fit(X, y)
# Get the scores with the appropriate score function
# Predict with X features and Compare predictions to y labels
y_pred = pipe.predict(X)
dummy_score = recall_score(y, y_pred)
print('Dummy Classification Score (recall):', dummy_score)
###Output
Dummy Classification Score (recall): 0.0
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
def split(X_values, y_values):
    # Hold out a randomly shuffled 25% test set
X_train, X_test, y_train, y_test = train_test_split(X_values, y_values, test_size=0.25, random_state=42)
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = split(X,y)
print ("Took this data...")
print (f'X Shape: {X.shape}\nY Shape: {y.shape}\n\n')
print ("And split it into this data... ")
print (f'X_train Shape: {X_train.shape},\nX_test Shape: {X_test.shape},\ny_train Shape: {y_train.shape},\ny_test Shape: {y_test.shape}')
###Output
Took this data...
X Shape: (748, 4)
Y Shape: (748,)
And split it into this data...
X_train Shape: (561, 4),
X_test Shape: (187, 4),
y_train Shape: (561,),
y_test Shape: (187,)
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression())
print("Pipeline Created")
###Output
Pipeline Created
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight' : [None, 'balanced'],
'logisticregression__C' : [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
# Fit on the train set, with grid search cross-validation
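# Note: 'recall_weighted' averages recall over both classes weighted by their support,
# which for these labels equals plain accuracy; scoring='recall' would score only the
# positive (donor returned) class.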
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall_weighted',
verbose=1)
print("Grid Search Cross Validation now running...")
gs.fit(X_train, y_train)
print("Grid Search CV complete...")
###Output
Grid Search Cross Validation now running...
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
# Scores for the Question's Answer
validation_score = gs.best_score_
print('***** Grid Search Scores *****')
print('\nBest Cross-Validation Score:', validation_score)
print('Best parameters:', gs.best_params_ ,'\n')
# Scores for me.
print("\n***** A few more stats just for me *****")
# Predict with X_test features and compare to actual.
y_pred_train = gs.predict(X_train)
train_score_A = accuracy_score(y_train, y_pred_train)
print('Train Score (Accuracy):', train_score_A)
y_pred_test = gs.predict(X_test)
test_score_A = accuracy_score(y_test, y_pred_test)
print('Test Score (Accuracy):', test_score_A)
train_score_B = gs.score(X_train, y_train)
print('\nTrain Score ("Recall"): ', train_score_B)
test_score_B = gs.score(X_test, y_test)
print('Test Score ("Recall"): ', test_score_B)
print('\nBest estimator:\n', gs.best_estimator_)
cvresults = pd.DataFrame(gs.cv_results_)
print('\nGenerated Results with Shape:', cvresults.shape)
###Output
***** Grid Search Scores *****
Best Cross-Validation Score: 0.7807486631016043
Best parameters: {'logisticregression__C': 1.0, 'logisticregression__class_weight': None, 'selectkbest__k': 4}
***** A few more stats just for me *****
Train Score (Accuracy): 0.7789661319073083
Test Score (Accuracy): 0.7540106951871658
Train Score ("Recall"): 0.7789661319073083
Test Score ("Recall"): 0.7540106951871658
Best estimator:
Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=4, score_func=<function f_classif at 0x7fe5d18f2158>)), ('logisticregression', LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='warn',
n_jobs=None, penalty='l2', random_state=None, solver='warn',
tol=0.0001, verbose=0, warm_start=False))])
Generated Results with Shape: (72, 23)
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36
###Code
true_negative = 85
true_positive = 36
false_negative = 8
false_positive = 58
###Output
_____no_output_____
###Markdown
Calculate accuracy
###Code
accuracy = (true_negative + true_positive) / (true_negative + true_positive + false_negative + false_positive)
print(accuracy)
###Output
0.6470588235294118
###Markdown
Calculate precision
###Code
precision = true_positive / (true_positive + false_positive)
print(precision)
###Output
0.3829787234042553
###Markdown
Calculate recall
###Code
recall = true_positive / (true_positive + false_negative)
print(recall)
###Output
0.8181818181818182
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score.
###Code
df["new_variable"] = (df["months_since_first_donation"] - df["months_since_last_donation"])
X = df.drop(columns='made_donation_in_march_2007')
y = df.made_donation_in_march_2007
# Split Data
X_train, X_test, y_train, y_test = split(X,y)
print ("Took this data...")
print (f'X Shape: {X.shape}\nY Shape: {y.shape}\n\n')
print ("And split it into this data... ")
print (f'X_train Shape: {X_train.shape},\nX_test Shape: {X_test.shape},\ny_train Shape: {y_train.shape},\ny_test Shape: {y_test.shape}')
# Make Pipeline
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression())
print("\nPipeline Created")
# GridSearch CV
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight' : [None, 'balanced'],
'logisticregression__C' : [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='recall_weighted',
verbose=1)
print("\nGrid Search Cross Validation now running...")
gs.fit(X_train, y_train)
print("Grid Search CV complete...")
# Check out the scores
new_validation_score = gs.best_score_
print('\n***** Grid Search Scores *****')
print('New Cross-Validation Score:', new_validation_score)
print('Previous Cross-Validation Score', validation_score)
print('Best parameters:', gs.best_params_ ,'\n')
# Scores for me.
print("\n***** A few more stats just for me *****")
# Predict with X_test features and compare to actual.
y_pred_train = gs.predict(X_train)
train_score_C = accuracy_score(y_train, y_pred_train)
print('Train Score (Accuracy):', train_score_C)
y_pred_test = gs.predict(X_test)
test_score_C = accuracy_score(y_test, y_pred_test)
print('Test Score (Accuracy):', test_score_C)
train_score_D = gs.score(X_train, y_train)
print('\nTrain Score ("Recall"): ', train_score_D)
test_score_D = gs.score(X_test, y_test)
print('Test Score ("Recall"): ', test_score_D)
###Output
Took this data...
X Shape: (748, 5)
Y Shape: (748,)
And split it into this data...
X_train Shape: (561, 5),
X_test Shape: (187, 5),
y_train Shape: (561,),
y_test Shape: (187,)
Pipeline Created
Grid Search Cross Validation now running...
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score.
###Code
# I'm going to try a slightly different one this time. I want to try random forest classifier.
# Make Pipeline
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
RandomForestClassifier())
print("\nPipeline Created")
# GridSearch CV
param_grid = {
'selectkbest__k': [1,2,3,4],
"randomforestclassifier__max_depth": [80, 90, 100, 110],
# "randomforestclassifier__max_features": [2, 3],
"randomforestclassifier__min_samples_split": [8, 10, 12],
"randomforestclassifier__min_samples_leaf": [3, 4, 5],
"randomforestclassifier__bootstrap": [False],
"randomforestclassifier__n_estimators" :[100, 200, 300, 1000],
"randomforestclassifier__criterion": ["gini"]}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=10,
scoring='recall_weighted',
verbose=1, n_jobs=10)
print("\nGrid Search Cross Validation now running...")
gs.fit(X_train, y_train)
print("Grid Search CV complete...")
# Check out the scores
new_validation_score = gs.best_score_
print('\n***** Grid Search Scores *****')
print('New Cross-Validation Score:', new_validation_score)
print('Previous Cross-Validation Score', validation_score)
print('Best parameters:', gs.best_params_ ,'\n')
print('Best score:', gs.best_score_)
print('Best estimator:', gs.best_estimator_)
###Output
Pipeline Created
Grid Search Cross Validation now running...
Fitting 10 folds for each of 576 candidates, totalling 5760 fits
###Markdown
Lets see how it's test stats turned out
###Code
y_pred_train = gs.predict(X_train)
train_score_C = accuracy_score(y_train, y_pred_train)
print('Random Forest Classifier - Train Score (Accuracy):', train_score_C)
y_pred_test = gs.predict(X_test)
test_score_C = accuracy_score(y_test, y_pred_test)
print('Random Forest Classifier - Test Score (Accuracy):', test_score_C)
train_score_D = gs.score(X_train, y_train)
print('\nRandom Forest Classifier - Train Score ("Recall"): ', train_score_D)
test_score_D = gs.score(X_test, y_test)
print('Random Forest Classifier - Test Score ("Recall"): ', test_score_D)
###Output
Random Forest Classifier - Train Score (Accuracy): 0.8645276292335116
Random Forest Classifier - Test Score (Accuracy): 0.7379679144385026
Random Forest Classifier - Train Score ("Recall"): 0.8645276292335116
Random Forest Classifier - Test Score ("Recall"): 0.7379679144385026
###Markdown
Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score?
###Code
# Which features were selected?
# 'selectkbest' is the autogenerated name of the SelectKBest() function in the pipeline
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
# get_support returns a mask of the columns in True / False
selected_mask = selector.get_support()
# Passing the boolean list as the column names creates a
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
###Output
Features selected:
months_since_last_donation
number_of_donations
total_volume_donated
new_variable
Features not selected:
months_since_first_donation
###Markdown
Part 4Calculate F1 score and False Positive Rate.
###Code
from sklearn.metrics import confusion_matrix
y_pred_train = gs.predict(X_train)
train_score_F1 = f1_score(y_train, y_pred_train)
print('Random Forest Classifier - F1 Train Score:', train_score_F1)
y_pred_test = gs.predict(X_test)
test_score_F1 = f1_score(y_test, y_pred_test)
print('Random Forest Classifier - F1 Test Score:', test_score_F1)
# FPR = FP / (FP + TN)
tn, fp, fn, tp = confusion_matrix(y_train, y_pred_train).ravel()
print(f'Random Forest Classifier - Train False Positive Rate: {fp / (fp + tn)}')
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_test).ravel()
print(f'Random Forest Classifier - Test False Positive Rate: {fp / (fp + tn)}')
###Output
Random Forest Classifier - F1 Train Score: 0.6576576576576576
Random Forest Classifier - F1 Test Score: 0.3466666666666667
Random Forest Classifier - Train False Positive Rate: 0.20652173913043478
Random Forest Classifier - Test False Positive Rate: 0.5185185185185185
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
# All imports in one place whether I need them or not
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import category_encoders as ce
from sklearn import metrics
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.metrics import recall_score
from mlxtend.plotting import plot_decision_regions
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
# Defining the X and y variables
X = df.drop(columns = 'made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
# Using accuracy score
majority_class = y.mode()[0]
y_pred = np.full(shape=y.shape, fill_value=majority_class)
accuracy_score(y, y_pred)
# Using value_counts(normalize)
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.) recall score is the **True Pos**/**(True Pos + False Neg)**
###Code
recall_score(y, y_pred)
# This gives us the recall score in the report along with valuable other information
print(classification_report(y, y_pred))
###Output
precision recall f1-score support
0 0.76 1.00 0.86 570
1 0.00 0.00 0.00 178
micro avg 0.76 0.76 0.76 748
macro avg 0.38 0.50 0.43 748
weighted avg 0.58 0.76 0.66 748
###Markdown
The recall score here is zero. Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
# split data into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=0.25,
random_state=42, shuffle=True)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
# Defining the selector and using the Standard Scaler
# selector = SelectKBest(f_classif)
pipeline = make_pipeline(
StandardScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs')
)
# pipeline.fit(X_train, y_train)
# y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__class_weight' : [None, 'balanced'],
'logisticregression__C' : [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
# Five folds not the four I previously had.
gridsearch = GridSearchCV(pipeline, param_grid = param_grid,
scoring = 'recall', cv = 5,
return_train_score = True, verbose = 5)
gridsearch.fit(X_train, y_train)
# I keep getting an error for C. I've looked at documentation and examples and
# I don't know how else to put it in the parameters.
# It actually wasn't my C value at all. Finally got it.
pd.DataFrame(gridsearch.cv_results_).sort_values(by = 'rank_test_score')
###Output
_____no_output_____
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
results = pd.DataFrame(gridsearch.cv_results_)
print(f'Best result from grid search of {len(results)} parameter combinations', '\n')
print('Best Cross-Validation Score: ',gridsearch.best_score_, '\n')
print('The values of C, the class_weight, and k are:')
gridsearch.best_params_
###Output
Best result from grid search of 72 parameter combinations
Best Cross-Validation Score: 0.784519402166461
The values of C, the class_weight, and k are:
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
# True Pos plus True Neg over the total
(36+85)/187
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
# True Pos over true pos plus false pos
36/94
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
# True Pos over True Pos plus False Neg
36/(36+8)
###Output
_____no_output_____
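###Markdown
To double-check the hand arithmetic above, here is a small sketch (relying on the imports at the top of this notebook) that rebuilds label/prediction arrays matching the given confusion matrix and lets scikit-learn recompute the same metrics:
###Code
# Reconstruct arrays that reproduce TN=85, FP=58, FN=8, TP=36 from the prompt
cm_y_true = np.array([0] * (85 + 58) + [1] * (8 + 36))
cm_y_pred = np.array([0] * 85 + [1] * 58 + [0] * 8 + [1] * 36)
print('accuracy :', accuracy_score(cm_y_true, cm_y_pred))          # (85 + 36) / 187
print('precision:', metrics.precision_score(cm_y_true, cm_y_pred)) # 36 / 94
print('recall   :', recall_score(cm_y_true, cm_y_pred))            # 36 / 44
###Output
_____no_output_____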
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
df.columns
df.shape
###Output
_____no_output_____
###Markdown
Majority Class Baseline
###Code
import numpy as np
majority_class = df['made_donation_in_march_2007'].mode()[0]
#print(majority_class)
y_pred = np.full(shape=df['made_donation_in_march_2007'].shape, fill_value=majority_class )
df['made_donation_in_march_2007'].shape, y_pred.shape
# checking what the mode of our y is
df['made_donation_in_march_2007'].mode()[0]
all(y_pred==majority_class)
###Output
_____no_output_____
###Markdown
Majority Class baseline accuracy score
###Code
from sklearn.metrics import accuracy_score
accuracy_score(df['made_donation_in_march_2007'], y_pred)
###Output
_____no_output_____
###Markdown
Class imbalance
###Code
df['made_donation_in_march_2007'].value_counts()
df['made_donation_in_march_2007'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
print(classification_report(df['made_donation_in_march_2007'], y_pred))
###Output
precision recall f1-score support
0 0.76 1.00 0.86 570
1 0.00 0.00 0.00 178
micro avg 0.76 0.76 0.76 748
macro avg 0.38 0.50 0.43 748
weighted avg 0.58 0.76 0.66 748
###Markdown
My understanding of recall: when the actual label is yes, how often does the model predict yes? Also known as the True Positive rate, or sensitivity. Some feature engineering
###Code
# let's take another look at the dataframe
df.head()
#df['average_volume'] = df['total_volume_donated'] /df['number_of_donations']
df['everage_freq'] = df['number_of_donations']/df['months_since_first_donation']
df['number_of_donations_squared'] = df['number_of_donations']**2
df.everage_freq.shape, df.number_of_donations_squared.shape
df.head()
###Output
_____no_output_____
###Markdown
Hmm... average donation (average volume) turned out to be a redundant column; the amount was always 250 per donation. We will drop these columns
###Code
#df =df.drop(columns=['average_volume'])
df.head()
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
X =df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
X.shape, y.shape
X.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, shuffle=True)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import f_classif, SelectKBest
pipeline = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=4), LogisticRegression())
pipeline.fit(X_train, y_train)
scores = cross_val_score(pipeline, X_train, y_train,scoring='accuracy', cv = 10)
scores
scores.mean(), scores.std()
###Output
_____no_output_____
###Markdown
Feature Engineering Result: It shows some improvement in our model. But it's not very different from our baseline. Just a few points better. Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
pipeline = make_pipeline(StandardScaler(),
SelectKBest(f_classif),
LogisticRegression())
param_grid = {
'selectkbest__k': [1,2,3,4],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0],
'logisticregression__class_weight':[None, 'balanced']
}
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import recall_score
gs = GridSearchCV(pipeline,param_grid=param_grid, cv=5,
scoring='recall',
verbose=1)
gs.fit(X_train, y_train)
import sklearn
sorted(sklearn.metrics.SCORERS.keys())
###Output
_____no_output_____
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', -validation_score)
print()
print('Best parameters:', gs.best_params_)
print()
###Output
Cross-Validation Score: -0.7922665569724393
Best parameters: {'logisticregression__C': 0.0001, 'logisticregression__class_weight': 'balanced', 'selectkbest__k': 2}
###Markdown
Feature Engineering dropped the score here as well, a little bit. Let's see what features were selected. Surprisingly, the feature that we engineered was selected as an important feature with relevant information. All four other columns were redundant according to the GridSearch method.
###Code
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
###Output
Features selected:
months_since_last_donation
everage_freq
Features not selected:
number_of_donations
total_volume_donated
months_since_first_donation
number_of_donations_squared
###Markdown
Moment of truth! We are going to test it on our test dataset
###Code
y_pred = gs.predict(X_test)
test_score = gs.score(X_test, y_test)
print('Test Score:', -test_score)
###Output
Test Score: -0.7916666666666666
###Markdown
Wahoo! It's a win! Our training model showed a recall score of about 0.79, and the test score is essentially the same, so the model generalizes well to the held-out data. It's also better than our baseline model, which had an accuracy score of 0.76... Let's print precision and recall for class 1 (actual blood donors): our model is certainly better than the baseline. Our precision is 0.35, which is not great at all, while recall is better at 0.78. So of the people who actually donate (class 1), our model correctly flags about 78% of them, although only about 35% of its positive predictions are correct. There is always a tradeoff between precision and recall.
###Code
print(classification_report(y_train, gs.predict(X_train)))
###Output
precision recall f1-score support
0 0.89 0.57 0.69 431
1 0.35 0.78 0.48 130
micro avg 0.61 0.61 0.61 561
macro avg 0.62 0.67 0.59 561
weighted avg 0.77 0.61 0.64 561
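###Markdown
The precision/recall trade-off mentioned above can be explored directly by moving the decision threshold away from the default 0.5. A rough sketch, assuming the fitted `gs` object and the training split from the earlier cells:
###Code
from sklearn.metrics import precision_score
# Predicted probability of class 1 (donating) from the best pipeline found by the grid search
proba = gs.predict_proba(X_train)[:, 1]
for threshold in (0.3, 0.5, 0.7):
    preds = (proba >= threshold).astype(int)
    print('threshold', threshold,
          'precision', round(precision_score(y_train, preds), 2),
          'recall', round(recall_score(y_train, preds), 2))
###Output
_____no_output_____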
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
(85 + 36) /(85+58 + 8 +36)
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
36 / (36+58)
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
36 /(8 +36)
###Output
_____no_output_____
###Markdown
False Positive Rate: When it's actually no, how often does it predict yes?
###Code
58 / (85+58)
###Output
_____no_output_____
###Markdown
F1 Score: 2 * (precision * recall) / (precision + recall)
###Code
2 * (0.3829787*0.8181818) / (0.3829787 + 0.8181818)
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
# all imports here
import pandas as pd
from sklearn.metrics import accuracy_score
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import RobustScaler
from sklearn.metrics import recall_score
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
#determine majority class
df['made_donation_in_march_2007'].value_counts(normalize=True)
# Guess the majority class for every prediction:
majority_class = 0
y_pred = [majority_class] * len(df['made_donation_in_march_2007'])
# accuracy equals the majority-class proportion, since we predict the majority class for every row (no split needed yet)
accuracy_score(df['made_donation_in_march_2007'], y_pred)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
###Code
#when it is actually yes, how often do you predict yes? 0, because always predicting no
# recall = true_positive / actual_positive
###Output
_____no_output_____
###Markdown
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
#split data
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
#validate 75% in train set
X_train.shape
#validate 25% in test set
X_test.shape
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
#make pipeline with 3 prerequisites
kbest = SelectKBest(f_regression)
pipeline = Pipeline([('scale', StandardScaler()),('kbest', kbest), ('lr', LogisticRegression(solver='lbfgs'))])
pipe = make_pipeline(RobustScaler(),SelectKBest(),LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
param_grid = {'selectkbest__k':[1,2,3,4],'logisticregression__class_weight':[None,'balanced'],'logisticregression__C':[.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]}
gs = GridSearchCV(pipe,param_grid,cv=5,scoring='recall')
gs.fit(X_train, y_train)
# grid_search = GridSearchCV(pipeline, { 'lr__class_weight': [None,'balanced'],'kbest__k': [1,2,3,4], 'lr__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]},scoring='recall', cv=5,verbose=1)
# grid_search.fit(X_train, y_train)
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.
DeprecationWarning)
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', -validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
gs.best_estimator_
# Cross-Validation Score: -0.784519402166461
# best parameters: k=1,C=0.0001,class_weight=balanced
###Output
_____no_output_____
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36
###Code
true_negative = 85
false_positive = 58
false_negative = 8
true_positive = 36
predicted_positive = 58+36
actual_positive = 8 + 36
###Output
_____no_output_____
###Markdown
Calculate accuracy
###Code
accuracy = (true_negative + true_positive) / (true_negative + false_positive +false_negative + true_positive)
print ('Accuracy:', accuracy)
###Output
Accuracy: 0.6470588235294118
###Markdown
Calculate precision
###Code
precision = true_positive / predicted_positive
print ('Precision:', precision)
###Output
Precision: 0.3829787234042553
###Markdown
Calculate recall
###Code
recall = true_positive / actual_positive
print ('Recall:', recall)
###Output
Recall: 0.8181818181818182
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
# # Which features were selected?
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print()
print('Features not selected:')
for name in unselected_names:
print(name)
# Predict with X_test features
y_pred = gs.predict(X_test)  # the fitted search object above is `gs` (grid_search was never defined)
# Compare predictions to y_test labels
test_score = recall_score(y_test, y_pred)
print('Test Score:', test_score)
f1 = 2*precision*recall/(precision+recall)
print('f1:', f1)
false_positive_rate = false_positive / (false_positive+true_negative)
print('False Positive Rate:', false_positive_rate)
###Output
_____no_output_____
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
# Tools:
import numpy as np
import pandas as pd
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
df.head()
df["made_donation_in_march_2007"].value_counts()
df["made_donation_in_march_2007"].shape
# Majority class / number of observations gives us accuracy score:
570 / 748
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
# Majority class baseline using libraries:
from sklearn.metrics import accuracy_score
import numpy as np
majority_class = df["made_donation_in_march_2007"].mode()[0]
y_pred = np.full((748,), fill_value=majority_class)
y_true = df["made_donation_in_march_2007"]
accuracy_score(y_true, y_pred)
###Output
_____no_output_____
###Markdown
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.) ---Recall in this specific model boils down to: "when it predicts 'No Donation', how often is it correct?".In other words, Recall = correct non-donation predictions / number of no-donation predictionsRecall = 570 / 748 = about 76%In our majority class baseline, Recall = Accuracy score. --- Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
from sklearn.model_selection import train_test_split
# Splitting data into train, test sets:
X = df.drop("made_donation_in_march_2007", axis=1)
y = df["made_donation_in_march_2007"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, shuffle=True)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
# Ensuring we have no nulls and all numeric features for Scikit-learn:
def no_nulls(df):
return not any(df.isnull().sum())
def all_numeric(df):
from pandas.api.types import is_numeric_dtype
return all(is_numeric_dtype(df[col]) for col in df)
no_nulls(X_train), all_numeric(X_train)
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
# Define an estimator and param_grid
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_regression),
LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
# Setting up Parameter Grid:
param_grid = {
'selectkbest__k': (1,2,3,4),
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [0.0001, 0.001, 0.01, 0.1, 1.0, 100.00, 1000.0, 10000.0]
}
# Fitting on the train set with GSCV:
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='neg_mean_absolute_error',
verbose=1)
gs.fit(X_train, y_train)
val_score = gs.best_score_
###Output
Fitting 5 folds for each of 64 candidates, totalling 320 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
print('Cross-Validation Score:', -val_score)
print('\n Best estimator:', gs.best_estimator_)
###Output
Cross-Validation Score: 0.2192513368983957
Best estimator: Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=4, score_func=<function f_regression at 0x7f2c754b9620>)), ('logisticregression', LogisticRegression(C=1.0, class_weight=None, dual=False, fit_i...enalty='l2', random_state=None, solver='lbfgs',
tol=0.0001, verbose=0, warm_start=False))])
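###Markdown
The individual `k`, `class_weight`, and `C` values can also be read straight from the grid-search object rather than from the printed estimator. A short sketch, assuming the fitted `gs` above:
###Code
# best_params_ holds exactly the parameter combination that won the search
best = gs.best_params_
print('k:', best['selectkbest__k'])
print('class_weight:', best['logisticregression__class_weight'])
print('C:', best['logisticregression__C'])
###Output
_____no_output_____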
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36
###Code
TP = 36
TN = 85
FP = 58
FN = 8
Total = TP + TN + FP + FN
###Output
_____no_output_____
###Markdown
Calculate accuracy
###Code
accuracy = (TP+TN) / Total
print("Accuracy = {}".format(accuracy))
###Output
Accuracy = 0.6470588235294118
###Markdown
Calculate precision
###Code
precision = TP/(FP+TP)
print("Precision = {}".format(precision))
###Output
Precision = 0.3829787234042553
###Markdown
Calculate recall
###Code
# Recall is the True Positive Rate (aka 'Sensitivity'):
recall = TP/(TP+FN)
print("Recall = {}".format(recall))
###Output
Recall = 0.8181818181818182
###Markdown
---**BONUS:** Calculate F1 Score
###Code
F1_score = ((precision*recall)/(precision+recall))*2
print("F1 Score = {}".format(F1_score))
###Output
F1 Score = 0.5217391304347826
###Markdown
Calculate False Positive Rate
###Code
false_positive_rate = FP/(FP+TN)
print("False Positive Rate = {}".format(false_positive_rate))
###Output
False Positive Rate = 0.40559440559440557
###Markdown
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
###Code
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
df = df.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
df.head()
df.made_donation_in_march_2007.value_counts()
###Output
_____no_output_____
###Markdown
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
###Code
# Calculate the share of yes/no donation outcomes in March 2007
df.made_donation_in_march_2007.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
So the accuracy of the prediction is ~76.2%, since the majority class (did not donate) makes up 76.2% of the total observations. What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.) Recall is the percentage of actual positives that the model predicts as positive; a majority class baseline never predicts a donation, so its recall for the positive class is 0.
###Code
# Setting our X and y variables for later model calculations
X = df.drop(columns=['made_donation_in_march_2007'])
y = df.made_donation_in_march_2007
# Logistic Regression model using X and y, to see if we can more accurately predict donations(y)
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', class_weight=None)  # class_weight was an undefined variable; default to None
model.fit(X, y)
y_pred = model.predict(X)
print(classification_report(y, y_pred))
print('accuracy', accuracy_score(y, y_pred))
###Output
precision recall f1-score support
0 0.78 0.97 0.87 570
1 0.59 0.12 0.20 178
micro avg 0.77 0.77 0.77 748
macro avg 0.69 0.55 0.54 748
weighted avg 0.74 0.77 0.71 748
accuracy 0.7713903743315508
###Markdown
Using a LogisticRegression model we improved our prediction accuracy from 76.2% to 77.1% Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
###Code
# Placeholder assignments (these are overwritten by the real train/test split below)
X_train = df.drop(columns='made_donation_in_march_2007')
y_train = df.made_donation_in_march_2007
X_test = df.drop(columns='made_donation_in_march_2007')
y_test = df.made_donation_in_march_2007
# setting the size of the test dataset as well as a seed
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=15)
###Output
_____no_output_____
###Markdown
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
###Code
# importing libraries I may need
from sklearn.feature_selection import f_regression, SelectKBest, f_classif
from sklearn.linear_model import Ridge, LogisticRegression, LogisticRegressionCV
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
# Defining my pipeline settings
pipe = make_pipeline(
RobustScaler(),
SelectKBest(f_classif),
LogisticRegression(solver='lbfgs'))
###Output
_____no_output_____
###Markdown
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
###Code
# adding parameters to the grid for GS, then fitting this model
param_grid = {
'selectkbest__k': range(1, len(X_train.columns)),
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0],
'logisticregression__class_weight': [None, 'balanced']
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5,
scoring='neg_mean_absolute_error',
verbose=1)
gs.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 54 candidates, totalling 270 fits
###Markdown
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
###Code
# Printing best score and best parameters
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', -validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
###Output
Cross-Validation Score: 0.22994652406417113
Best estimator: Pipeline(memory=None,
steps=[('robustscaler', RobustScaler(copy=True, quantile_range=(25.0, 75.0), with_centering=True,
with_scaling=True)), ('selectkbest', SelectKBest(k=2, score_func=<function f_classif at 0x7fda58a42ea0>)), ('logisticregression', LogisticRegression(C=0.1, class_weight=None, dual=False, fit_inte...enalty='l2', random_state=None, solver='lbfgs',
tol=0.0001, verbose=0, warm_start=False))])
###Markdown
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model: Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Calculate accuracy
###Code
# manually calculating accuracy from the confusion matrix
# accuracy = (true negatives + true positives) / total, i.e. ([1, 1] + [2, 2]) / total
(85 + 36) / (85 + 58 + 8 + 36)
###Output
_____no_output_____
###Markdown
Calculate precision
###Code
# precision from confusion matrix
# [2, 2] / ([2, 2] + [1, 2])
36 / (36 + 58)
###Output
_____no_output_____
###Markdown
Calculate recall
###Code
# recall score from confusion matrix
# [2, 2] / ([2, 1] + [2, 2])
36 / (8 + 36)
###Output
_____no_output_____
###Markdown
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
###Code
# installing category_encoders for feature engineering
!pip install category_encoders
# import ce package
import category_encoders as ce
# new pipeline with some preprocessing
from sklearn.preprocessing import StandardScaler
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
StandardScaler(),
LogisticRegression(solver='lbfgs')
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
accuracy_score(y_test, y_pred)
# Setting grid options for GS
param_grid = {
'logisticregression__class_weight': [None, 'balanced'],
'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipeline, param_grid=param_grid, cv=5,
scoring='neg_mean_absolute_error',
verbose=1)
gs.fit(X_train, y_train)
# printing best score and estimator again
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', -validation_score)
print()
print('Best estimator:', gs.best_estimator_)
print()
###Output
Cross-Validation Score: 0.33326694272908614
Best estimator: Ridge(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001)
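###Markdown
BONUS Part 4 (F1 score and false positive rate) was not computed above; here is a quick sketch using the confusion-matrix counts given in Part 4 (TN=85, FP=58, FN=8, TP=36):
###Code
precision = 36 / (36 + 58)
recall = 36 / (36 + 8)
f1 = 2 * precision * recall / (precision + recall)
false_positive_rate = 58 / (58 + 85)
print('F1 score:', f1)
print('False Positive Rate:', false_positive_rate)
###Output
_____no_output_____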
|
scripts/generate.ipynb | ###Markdown
gRPC bridge generator for ROSThis notebook generates a gRPC server implementing the available ROS topics and services. Please see the [README](/README.md) for more detail. Steps: - [Snapshot ROS topics and services](snapshot) - [Load snapshots](load) - [Generate the .proto file](proto) - [Generate the gRPC server](server)
###Code
import rospy
import rosmsg
import rostopic
import rosservice
import os
import re
import io
import configparser
import doctest
from cookiecutter.main import cookiecutter
import collections
# Options
generator_pkg_dir = os.path.abspath('..')
default_pkg_dir = os.path.abspath(os.path.join(generator_pkg_dir, '..'))
TEMPLATE = os.path.abspath(os.path.join(generator_pkg_dir, './template'))
def get_param(name, default):
value = rospy.get_param(name, default)
print('param', name, value)
if value == "":
return default
return value
PKG_NAME = get_param('pkg_name', 'grpc_bridge')
PKGS_ROOT = get_param('pkgs_root', default_pkg_dir)
PKG_PATH = os.path.join(PKGS_ROOT, PKG_NAME)
PKG_SRC_PATH = os.path.join(PKG_PATH, 'src')
PROTO_FILE = os.path.join(PKG_PATH, 'ros.proto')
SNAPSHOT_PATH = get_param('snapshot_path', os.path.join(PKG_PATH, 'snapshot.ini'))
KEEP_EXISTING_SNAPSHOT = get_param('keep_existing_snapshot', False) is True
if KEEP_EXISTING_SNAPSHOT and not os.path.exists(SNAPSHOT_PATH):
raise OSError('Can\'t find snapshot file "{}"'.format(SNAPSHOT_PATH))
print("PKG_NAME =", PKG_NAME)
print("PKG_PATH =", PKG_PATH)
print("SNAPSHOT_PATH =", SNAPSHOT_PATH)
print("KEEP_EXISTING_SNAPSHOT =", KEEP_EXISTING_SNAPSHOT)
print("PROTO_FILE =", PROTO_FILE)
print("TEMPLATE =", TEMPLATE)
cookiecutter(TEMPLATE,
output_dir=PKGS_ROOT,
overwrite_if_exists=True,
no_input=True,
extra_context={'pkg_name': PKG_NAME})
###Output
_____no_output_____
###Markdown
Utils
###Code
def write_file(path, content):
folder = os.path.dirname(path)
if not os.path.exists(folder):
os.makedirs(folder)
f = open(os.path.join(path), 'w+')
f.write(content)
f.close()
print("file was written to {}".format(path))
scalar_ros2pb = {
'bool': 'bool',
'int8': 'int32',
'uint8': 'uint32',
'int16': 'int32',
'uint16': 'uint32',
'int32': 'int32',
'uint32': 'uint32',
'int64': 'int64',
'uint64': 'uint64',
'float32': 'float',
'float64': 'double',
'string': 'string',
'time': 'Time',
'duration': 'Duration',
'char': 'uint32',
'byte': 'int32',
}
# TODO rename to strip_ros_array_notation
def strip_array_notation(name):
"""Remove array notation from ROS msg types
>>> strip_array_notation('Foo[]')
'Foo'
>>> strip_array_notation('Foo[123]')
'Foo'
>>> strip_array_notation('Foo')
'Foo'
"""
return re.sub(r'\[\d*\]$', '', name)
def is_binary_ros_type(package, typename, is_array):
"""True if the type should be represented as bytes in protobuf
>>> is_binary_ros_type(None, 'uint8', True)
True
>>> is_binary_ros_type('some_msgs', 'Foo', True)
False
>>> is_binary_ros_type(None, 'char', False)
False
"""
return (typename == 'uint8' or typename == 'char') and (package is None) and (is_array is True)
def type_ros2pb(ros_type):
"""Convert ROS msg type to protobuff type
>>> type_ros2pb('byte')
'int32'
>>> type_ros2pb('std_msgs/Header')
'std_msgs.Header'
>>> type_ros2pb('string[]')
'repeated string'
>>> type_ros2pb('time')
'Time'
>>> type_ros2pb('string')
'string'
>>> type_ros2pb('uint32')
'uint32'
>>> type_ros2pb('uint8')
'uint32'
>>> type_ros2pb('uint8[]')
'bytes'
>>> type_ros2pb('char[3]')
'bytes'
"""
package, typename, is_array = parse_ros_type(ros_type)
# use bytes for ROS uint8 and char arrays (same way as the python message generator)
if is_binary_ros_type(package, typename, is_array):
return 'bytes'
pb_type = typename if package is None else '{}.{}'.format(package, typename)
if pb_type in scalar_ros2pb:
pb_type = scalar_ros2pb[pb_type]
if is_array:
pb_type = 'repeated ' + pb_type
return pb_type
def grpc_service_name(ros_name):
"""Convert ROS topic or service names to valid protobuff names (replace slash with underscore)
>>> grpc_service_name('/rosout')
'rosout'
>>> grpc_service_name('/rosout_agg')
'rosout_agg'
>>> grpc_service_name('/turtle1/pose')
'turtle1_pose'
>>> grpc_service_name('/turtle1/color_sensor')
'turtle1_color_sensor'
"""
return ros_name.replace('/', '_')[1:]
def parse_ros_type(ros_type):
"""Convert a line in a ROS msg file to (package, typename, is_array)
>>> parse_ros_type('rosgraph_msgs/Log')
('rosgraph_msgs', 'Log', False)
>>> parse_ros_type('turtlesim/Pose')
('turtlesim', 'Pose', False)
>>> parse_ros_type('uint32')
(None, 'uint32', False)
>>> parse_ros_type('time')
(None, 'time', False)
>>> parse_ros_type('rosgraph_msgs/Log[]')
('rosgraph_msgs', 'Log', True)
"""
is_array = ros_type.endswith(']')
ros_type = strip_array_notation(ros_type)
if '/' in ros_type:
package, typename = ros_type.split('/')
return package, typename, is_array
else:
return None, ros_type, is_array
doctest.testmod()
###Output
_____no_output_____
###Markdown
Snapshot
###Code
if not KEEP_EXISTING_SNAPSHOT:
snapshot = configparser.ConfigParser()
# enable case-sensitive keys (so ROS msg types can be keys)
snapshot.optionxform=str
# list all topics, services, and their message definition in the snapshot
snapshot['MESSAGE_DEFINITIONS'] = {}
snapshot['TOPICS'] = {}
snapshot['SERVICES'] = {}
published, subscribed = rostopic.get_topic_list()
published_topics = dict(map(lambda x: (x[0], x[1]), (published + subscribed))).items()
for (message_name, ros_type) in published_topics:
snapshot['TOPICS'][message_name] = ros_type
snapshot['MESSAGE_DEFINITIONS'][ros_type] = rosmsg.get_msg_text(ros_type)
services = rosservice.get_service_list()
for service_name in services:
try:
ros_type = rosservice.get_service_type(service_name)
snapshot['SERVICES'][service_name] = ros_type
snapshot['MESSAGE_DEFINITIONS'][ros_type] = rosmsg.get_srv_text(ros_type)
except rosservice.ROSServiceIOException as e:
rospy.logerr('Can\'t read service "{}": {}'.format(service_name, e))
# flatten nested message definitions
# TODO tests
def flatten_types():
changed = False
for type_name in snapshot['MESSAGE_DEFINITIONS'].keys():
fields = snapshot['MESSAGE_DEFINITIONS'][type_name].split('\n')
top_level_fields = []
subfields = []
for field in fields:
ros_type = strip_array_notation(field.split(' ')[0])
package, typename, _ = parse_ros_type(ros_type)
is_subfield = field.startswith(' ')
# make sure that empty messages will be listed in the snapshot too
if package and ros_type not in snapshot['MESSAGE_DEFINITIONS']:
snapshot['MESSAGE_DEFINITIONS'][ros_type] = ''
if is_subfield:
subfields.append(field[2:])
if subfields and (not is_subfield or field == fields[-1]):
# empty the subfields buffer into a separate message definition
sub_type_name = top_level_fields[-1].split(' ')[0]
sub_type_name = strip_array_notation(sub_type_name)
snapshot['MESSAGE_DEFINITIONS'][sub_type_name] = '\n'.join(subfields)
changed = True
subfields = []
if not is_subfield:
top_level_fields.append(field)
snapshot['MESSAGE_DEFINITIONS'][type_name] = '\n'.join(top_level_fields)
if (changed):
flatten_types()
flatten_types()
# Order the content of each section alphabetically
for section in snapshot._sections:
snapshot._sections[section] = collections.OrderedDict(sorted(snapshot._sections[section].items(), key=lambda t: t[0]))
# convert snapshot to string and save to file
with io.StringIO() as ss:
snapshot.write(ss)
ss.seek(0) # rewind
write_file(SNAPSHOT_PATH, ss.read())
###Output
_____no_output_____
###Markdown
Load
###Code
class RosSnapshot:
def __init__(self, path=None):
self.config = configparser.ConfigParser()
# enable case-sensitive keys (for the ROS types)
self.config.optionxform=str
if path:
self.config.read(path)
def get_message_definitions(self):
return self.config["MESSAGE_DEFINITIONS"]
def get_message_definition_packages(self):
ros_types = self.get_message_definitions().keys()
return set(map(lambda t: t.split('/')[0], ros_types))
def get_topics(self):
"""
Returns Map<topic, ros_type>
"""
return self.config["TOPICS"]
def get_services(self):
"""
Returns Map<service, ros_type>
"""
return self.config["SERVICES"]
def get_sections(self, ros_type):
"""
Returns the parts of a ROS message as a list. (one part for topics, two for services, three for actions)
"""
# ROS has these two "complex primitives". Here, we return their fields too so the rest of the code can handle them as regular message types
if ros_type == 'time':
return ['uint32 secs\nuint32 nsecs']
if ros_type == 'duration':
return ['int32 secs\nint32 nsecs']
if ros_type not in self.config["MESSAGE_DEFINITIONS"]:
raise KeyError("Can't find message definition for '{}'".format(ros_type))
return self.config["MESSAGE_DEFINITIONS"][ros_type].split('---')
def _get_all_fields(self, ros_type, section=0):
"""
Returns (ros_type, field_name)[]
"""
sections = self.get_sections(ros_type)
fields = sections[section].strip().split('\n')
fields = filter(lambda f: f != '', fields)
# split each field line into (type, name) pairs
fields = map(lambda f: f.split(' '), fields)
return fields
def get_fields(self, ros_type, section=0):
"""
Returns (ros_type, field_name)[]
"""
fields = self._get_all_fields(ros_type, section)
fields = list(filter(lambda f: not '=' in f[1], fields))
return fields
def get_constants(self, ros_type, section=0):
"""
Returns (ros_type, field_name)[]
"""
fields = self._get_all_fields(ros_type, section)
fields = list(filter(lambda f: '=' in f[1], fields))
return fields
def __str__(self):
return '<RosSnapshot topics={} services={} message_definitions={}>'.format(
len(self.get_topics()),
len(self.get_services()),
len(self.get_message_definitions()))
snap = RosSnapshot(SNAPSHOT_PATH)
# Example ROS snapshot for unit testing
TEST_SNAPSHOT_FILE = """
[MESSAGE_DEFINITIONS]
msgs/Foo = byte BIM=1
byte BAM=2
byte BUM=4
msgs2/Taz taz
time stamp
string name
uint32[] numbers
uint8[] image
msgs/Bar[5] bar
msgs/Bar = uint8 number
msgs2/Taz = duration[] tazz
srvs/Empty = ---
srvs/Baz = string logger
---
string level
[TOPICS]
/foo = msgs/Foo
[SERVICES]
/baz = srvs/Baz
"""
def tsnap() -> RosSnapshot:
"""
Returns a RosSnapshot instance filled with the test data
"""
snap = RosSnapshot()
snap.config.read_string(TEST_SNAPSHOT_FILE)
return snap
###Output
_____no_output_____
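###Markdown
A quick usage sketch of the loader above, run against the test fixture rather than a live ROS graph, just to illustrate the `RosSnapshot` API:
###Code
example = tsnap()
print(example)                         # summary of topics / services / message definitions
print(dict(example.get_topics()))      # expected: {'/foo': 'msgs/Foo'}
print(example.get_fields('msgs/Bar'))  # expected: [['uint8', 'number']]
###Output
_____no_output_____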
###Markdown
Proto
###Code
topic_service_template = """
service {service_name} {{
rpc Publish({pb_type}) returns (Empty);
rpc Subscribe(Empty) returns (stream {pb_type});
}}
"""
srv_service_template = """
service {service_name} {{
rpc Call({pb_type}Request) returns ({pb_type}Response);
}}
"""
header = '''
syntax = 'proto3';
package ros;
message Empty {}
message Time {
uint32 secs = 1;
uint32 nsecs = 2;
}
message Duration {
int32 secs = 1;
int32 nsecs = 2;
}
'''
def generate_pb_message_definition_for_package(snap: RosSnapshot, pkg):
"""Gerate the proto definitions of a ros package
>>> print(generate_pb_message_definition_for_package(tsnap(), "msgs"))
message msgs {
message Bar {
uint32 number = 1;
}
/**
* BIM=1
* BAM=2
* BUM=4
*/
message Foo {
msgs2.Taz taz = 1;
Time stamp = 2;
string name = 3;
repeated uint32 numbers = 4;
bytes image = 5;
repeated msgs.Bar bar = 6;
}
}
<BLANKLINE>
<BLANKLINE>
>>> print(generate_pb_message_definition_for_package(tsnap(), "srvs"))
message srvs {
message BazRequest {
string logger = 1;
}
message BazResponse {
string level = 1;
}
message EmptyRequest {
}
message EmptyResponse {
}
}
<BLANKLINE>
<BLANKLINE>
"""
proto = 'message %s {\n' % pkg
for ros_type in sorted(snap.get_message_definitions().keys()):
pkgname, msgname = ros_type.split('/')
msgname = strip_array_notation(msgname)
if pkg == pkgname:
section_count = len(snap.get_sections(ros_type))
is_msg = section_count == 1
is_srv = section_count == 2
for section in range(section_count):
postfix = ''
if is_srv:
postfix = 'Request' if section == 0 else 'Response'
fields = snap.get_fields(ros_type, section)
constants = snap.get_constants(ros_type, section)
if constants:
proto += ' /**\n'
for _, constant in constants:
proto += ' * {}\n'.format(constant)
proto += ' */\n'
proto += ' message %s {\n' % (msgname + postfix)
for key, (field_type, field_name) in enumerate(fields):
definition = '{} {} = {};'.format(
type_ros2pb(field_type), field_name, key+1)
proto += ' {}\n'.format(definition)
proto += ' }\n'
proto += '}\n\n'
return proto
def generate_pb_message_all_definitions(snap: RosSnapshot):
proto = ''
for pkg in sorted(snap.get_message_definition_packages()):
proto += generate_pb_message_definition_for_package(snap, pkg)
return proto
def generate_proto_file(snap: RosSnapshot):
print('Found {} messages'.format(len(snap.get_message_definitions())))
content = header
content += generate_pb_message_all_definitions(snap)
for (topic, ros_type) in sorted(snap.get_topics().items()):
content += topic_service_template.format(
service_name=grpc_service_name(topic),
pb_type=type_ros2pb(ros_type))
for (service, ros_type) in sorted(snap.get_services().items()):
content += srv_service_template.format(
service_name=grpc_service_name(service),
pb_type=type_ros2pb(ros_type))
write_file(os.path.join(PROTO_FILE), content)
doctest.testmod()
generate_proto_file(snap)
###Output
_____no_output_____
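###Markdown
As an illustration of what the topic template above produces, here is a small sketch that renders the service stanza for the `/foo` topic from the test snapshot:
###Code
# Purely illustrative: format the topic template for the test fixture's /foo topic
print(topic_service_template.format(
    service_name=grpc_service_name('/foo'),
    pb_type=type_ros2pb('msgs/Foo')))
###Output
_____no_output_____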
###Markdown
Server
###Code
def add_tab(lines, tabs=1):
""" Insert tabs at the beginning of each line """
return re.sub(r'^([^$])', ' ' * tabs + '\\1', lines, flags=re.MULTILINE)
def generate_msg_copier(snap: RosSnapshot, ros_type, left='pb_msg', right='ros_msg', new_instance=False, section=0):
""" Generate scripts for converison between protobuf and ROS messages
>>> print(generate_msg_copier(tsnap(), 'msgs/Bar', 'pb_msg', 'ros_msg', True))
pb_msg = ros_pb.msgs.Bar()
pb_msg.number = ros_msg.number
<BLANKLINE>
>>> print(generate_msg_copier(tsnap(), 'msgs/Bar', 'ros_msg', 'pb_msg', True))
ros_msg = roslib.message.get_message_class('msgs/Bar')()
ros_msg.number = pb_msg.number
<BLANKLINE>
>>> print(generate_msg_copier(tsnap(), 'srvs/Baz', 'ros_msg','pb_msg', section=0))
ros_msg.logger = pb_msg.logger
<BLANKLINE>
>>> print(generate_msg_copier(tsnap(), 'srvs/Baz', 'pb_msg','ros_msg', section=1))
pb_msg.level = ros_msg.level
<BLANKLINE>
>>> print(generate_msg_copier(tsnap(), 'msgs/Foo', 'ros_msg','pb_msg', True))
ros_msg = roslib.message.get_message_class('msgs/Foo')()
for pb_msg_ in pb_msg.taz.tazz:
ros_msg_ = ros_pb.duration()
ros_msg_.secs = pb_msg_.secs
ros_msg_.nsecs = pb_msg_.nsecs
ros_msg.taz.tazz.append(ros_msg_)
ros_msg.stamp.secs = pb_msg.stamp.secs
ros_msg.stamp.nsecs = pb_msg.stamp.nsecs
ros_msg.name = pb_msg.name
for pb_msg_ in pb_msg.numbers:
ros_msg.numbers.append(pb_msg_)
ros_msg.image = pb_msg.image
for pb_msg_ in pb_msg.bar:
ros_msg_ = roslib.message.get_message_class('msgs/Bar')()
ros_msg_.number = pb_msg_.number
ros_msg.bar.append(ros_msg_)
<BLANKLINE>
>>> print(generate_msg_copier(tsnap(), 'msgs2/Taz', 'pb_msg','ros_msg', True))
pb_msg = ros_pb.msgs2.Taz()
for ros_msg_ in ros_msg.tazz:
pb_msg_ = ros_pb.duration()
pb_msg_.secs = ros_msg_.secs
pb_msg_.nsecs = ros_msg_.nsecs
pb_msg.tazz.append(pb_msg_)
<BLANKLINE>
"""
result = ''
package, typename, _ = parse_ros_type(ros_type)
if new_instance:
if package:
if left.startswith('ros_'):
result += '{} = roslib.message.get_message_class(\'{}/{}\')()\n'.format(
left, package, typename)
else:
result += '{} = ros_pb.{}.{}()\n'.format(left, package, typename)
else:
result += '{} = ros_pb.{}()\n'.format(left, typename)
for ros_fieldtype, fieldname in snap.get_fields(ros_type, section):
package, typename, is_array = parse_ros_type(ros_fieldtype)
is_binary = is_binary_ros_type(package, typename, is_array)
ros_fieldtype = strip_array_notation(ros_fieldtype)
is_complex = package is not None # TODO is_scalar
is_time = package is None and (typename == 'time' or typename == 'duration')
if is_array and not is_binary:
sub_left = '{}_'.format(left.split('.')[0])
sub_right = '{}_'.format(right.split('.')[0])
result += 'for {sub_right} in {right}.{fieldname}:\n'.format(
sub_right=sub_right, right=right, fieldname=fieldname)
body = ''
if is_complex or is_time:
body += generate_msg_copier(
snap, ros_fieldtype, sub_left, sub_right, True)
body += '{left}.{fieldname}.append({sub_left})\n'.format(
left=left, sub_left=sub_left, fieldname=fieldname)
else:
body += '{left}.{fieldname}.append({sub_right})\n'.format(
left=left, sub_right=sub_right, fieldname=fieldname)
result += add_tab(body)
elif is_complex or is_time:
sub_left = '{}.{}'.format(left, fieldname)
sub_right = '{}.{}'.format(right, fieldname)
result += generate_msg_copier(snap, ros_fieldtype, sub_left, sub_right, False)
else:
result += '{left}.{fieldname} = {right}.{fieldname}\n'.format(
left=left, right=right, fieldname=fieldname)
return result
doctest.testmod()
frame_template = '''
#!/usr/bin/env python3
from concurrent import futures
import time
import math
import logging
import argparse
import sys
import threading
import time
import grpc
import rospy
import roslib.message
import ros_pb2 as ros_pb
import ros_pb2_grpc as ros_grpc
{classes}
def create_server():
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
{add_servicers}
return server
def run_server():
address = rospy.get_param('address', '[::]:50051')
rospy.init_node('grpc_server', anonymous=True)
server = create_server()
server.add_insecure_port(address)
server.start()
print("gRPC server is running at %s" % address )
rospy.spin()
if __name__ == '__main__':
run_server()
'''
add_servicer_template = 'ros_grpc.add_{servicer_class}_to_server({servicer_class}(), server)'
topic_class_template = '''
class {servicer_class}(ros_grpc.{servicer_class}):
def __init__(self):
self.pub = None
self.Msg = roslib.message.get_message_class('{ros_type}')
def Publish(self, pb_msg, context):
if self.pub == None:
self.pub = rospy.Publisher('{topic}', self.Msg, queue_size=10)
ros_msg = self.Msg()
{copy_pb2ros}
self.pub.publish(ros_msg)
return ros_pb.Empty()
def Subscribe(self, request, context):
c = {{'unsubscribed': False}}
ros_messages = []
def callback(ros_msg):
ros_messages.append(ros_msg)
subscription = rospy.Subscriber('{topic}', self.Msg, callback)
def on_rpc_done():
c['unsubscribed'] = True
print("Attempting to regain servicer thread...", c)
subscription.unregister()
context.add_callback(on_rpc_done)
while not c['unsubscribed']:
while ros_messages:
ros_msg = ros_messages.pop(0)
{copy_ros2pb}
yield pb_msg
rospy.sleep(0.01)
'''
service_class_template = """
class {servicer_class}(ros_grpc.{servicer_class}):
def Call(self, pb_msg, context):
Srv = roslib.message.get_service_class('{ros_type}')
rospy.wait_for_service('{service}')
call = rospy.ServiceProxy('{service}', Srv)
ros_msg = Srv._request_class()
{copy_pb2ros}
ros_msg = call(ros_msg)
{new_pb_response}
{copy_ros2pb}
return pb_msg
"""
def generate_server(snap: RosSnapshot):
add_servicers = []
servicer_classes = []
for topic, ros_type in sorted(snap.get_topics().items()):
servicer_class = grpc_service_name(topic) + 'Servicer'
add_servicers.append(add_servicer_template.format(
servicer_class=servicer_class))
copy_ros2pb = generate_msg_copier(snap, ros_type, 'pb_msg', 'ros_msg', new_instance=True)
copy_pb2ros = generate_msg_copier(snap, ros_type, 'ros_msg', 'pb_msg')
copy_ros2pb = add_tab(copy_ros2pb, 4)
copy_pb2ros = add_tab(copy_pb2ros, 2)
servicer_classes.append(topic_class_template.format(
servicer_class=servicer_class, ros_type=ros_type, topic=topic, copy_ros2pb=copy_ros2pb, copy_pb2ros=copy_pb2ros))
for service, ros_type in sorted(snap.get_services().items()):
servicer_class = grpc_service_name(service) + 'Servicer'
add_servicers.append(add_servicer_template.format(
servicer_class=servicer_class))
copy_ros2pb = generate_msg_copier(snap, ros_type, 'pb_msg', 'ros_msg', section=1)
copy_pb2ros = generate_msg_copier(snap, ros_type, 'ros_msg', 'pb_msg', section=0)
copy_ros2pb = add_tab(copy_ros2pb, 2)
copy_pb2ros = add_tab(copy_pb2ros, 2)
package, typename, _ = parse_ros_type(ros_type)
new_pb_response = 'pb_msg = ros_pb.{}.{}Response()\n'.format(package, typename)
servicer_classes.append(service_class_template.format(
servicer_class=servicer_class, ros_type=ros_type, service=service, copy_ros2pb=copy_ros2pb, copy_pb2ros=copy_pb2ros, new_pb_response=new_pb_response))
return frame_template.format(add_servicers=add_tab('\n'.join(add_servicers)), classes='\n'.join(servicer_classes))
write_file(os.path.join(PKG_SRC_PATH, '{}.py'.format(PKG_NAME)),
generate_server(snap))
print('grpc_server.py file generated')
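# Compile PROTO_FILE into Python protobuf/gRPC bindings next to the generated server;
# the generated grpc_server.py imports them as ros_pb2 and ros_pb2_grpc.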
!python3 -m grpc_tools.protoc \
-I={os.path.relpath(PKG_PATH)} \
--python_out={os.path.relpath(PKG_SRC_PATH)} \
--grpc_python_out={os.path.relpath(PKG_SRC_PATH)} \
{os.path.relpath(PROTO_FILE)}
# Check if the doctests are ok
doctest.testmod()
###Output
_____no_output_____ |
vfa_blog/VariableFlipAngle.ipynb | ###Markdown
Welcome to a qMRLab interactive blog post Jupyter Notebook!

If this is your first time running a Jupyter Notebook, there are a lot of tutorials available online to help. [Here's one](https://www.dataquest.io/blog/jupyter-notebook-tutorial/) for your convenience.

Introduction

This notebook contains everything needed to reproduce the Variable Flip Angle T1 blog post on the [qMRLab website](). In fact, this notebook generated the HTML for the blog post too! This notebook is currently running on a MyBinder server that only you can access, but if you want to be kept up-to-date on any changes that the developers make to this notebook, you should go to its [GitHub repository](https://github.com/qMRLab/t1_notebooks) and follow it by clicking the "Watch" button in the top right (you may need to create a GitHub account, if you don't have one already).

Tips

Here are a few things you can do in this notebook.

Code

* Run the entire processing by clicking above on the "Kernel" tab, then "Restart & Run All". It will be complete when none of the cells have an asterisk "\*" in the square brackets.
* To change the code, you need to click once on code cells. To re-run that cell, click the "Run" button above when the cell is selected.
  * **Note:** Cells can depend on previous cells, or even on previous runs of the cell itself, so it's best to run all the previous cells beforehand.
* This binder runs on SoS, which allows the mixing of Octave (i.e. an open-source MATLAB) and Python cells. Take a look at the drop-down menu on the top right of the cells to know which one you are running.
* To transfer data from cells of one language to another, you need to create a new cell in the incoming language and run `%get (param name) --from (outgoing language)`. See cells below for several examples within this notebook.

HTML

* To reproduce the HTML of the blog post, run the entire processing pipeline (see point one in the previous section), then save the notebook (save icon, top left). Now, click on the drop-down menu on the left panel, and select `%sossave --to html --force`. After a few seconds, it should output "Workflow saved to VariableFlipAngle.html" – click on the HTML name, and you're done!
* Cells with tags called "scratch" are not displayed in the generated HTML.
* Cells with the tag "report_output" display the output (e.g. figures) in the generated HTML.
* Currently in an un-run notebook, the HTML is not formatted like the website. To do so, run the Python module import cell (`# Module imports`) and then the very last cell (`display(HTML(...)`).

**If you have any other questions or comments, please raise them in a [GitHub issue](https://github.com/qMRLab/t1_notebooks/issues).**

Note

The following cell is meant to be displayed for instructional purposes in the blog post HTML when "All cells" gets displayed (i.e. the Octave code).
###Code
% **Blog post code introduction**
%
% Congrats on activating the "All cells" option in this interactive blog post =D
%
% Below, several new HTML blocks have appeared prior to the figures, displaying the Octave/MATLAB code that was used to generate the figures in this blog post.
%
% If you want to reproduce the data on your own local computer, you simply need to have qMRLab installed in your Octave/MATLAB path and run the "startup.m" file, as is shown below.
%
% If you want to get under the hood and modify the code right now, you can do so in the Jupyter Notebook of this blog post hosted on MyBinder. The link to it is in the introduction above.
# PYTHON CODE
# Module imports
import matplotlib.pyplot as plt
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
config={'showLink': False, 'displayModeBar': False}
init_notebook_mode(connected=True)
from IPython.core.display import display, HTML
###Output
_____no_output_____
###Markdown
Variable Flip Angle T1 Mapping

Variable flip angle (VFA) T1 mapping (Christensen et al. 1974; Gupta 1977; Fram et al. 1987), also known as Driven Equilibrium Single Pulse Observation of T1 (DESPOT1) (Homer & Beevers 1985; Deoni et al. 2003), is a rapid quantitative T1 measurement technique that is widely used to acquire 3D T1 maps (e.g. whole-brain) in a clinically feasible time. VFA estimates T1 values by acquiring multiple spoiled gradient echo acquisitions, each with different excitation flip angles (θn for n = 1, 2, .., N and θi ≠ θj). The steady-state signal of this pulse sequence (Figure 1) uses very short TRs (on the order of magnitude of 10 ms) and is very sensitive to T1 for a wide range of flip angles.

VFA is a technique that originates from the NMR field, and was adopted because of its time efficiency and the ability to acquire accurate T1 values simultaneously for a wide range of values (Christensen et al. 1974; Gupta 1977). For imaging applications, VFA also benefits from an increase in SNR because it can be acquired using a 3D acquisition instead of multislice, which also helps to reduce slice profile effects. One important drawback of VFA for T1 mapping is that the signal is very sensitive to inaccuracies in the flip angle value, thus impacting the T1 estimates. In practice, the nominal flip angle (i.e. the value set at the scanner) is different than the actual flip angle experienced by the spins (e.g. at 3.0 T, variations of up to ±30%), an issue that increases with field strength. VFA typically requires the acquisition of another quantitative map, the transmit RF amplitude (B1+, or B1 for short), to calibrate the nominal flip angle to its actual value because of B1 inhomogeneities that occur in most loaded MRI coils (Sled & Pike 1998). The need to acquire an additional B1 map reduces the time savings offered by VFA over saturation-recovery techniques, and inaccuracies/imprecisions of the B1 map are also propagated into the VFA T1 map (Boudreau et al. 2017; Lee et al. 2017).

Figure 1. Simplified pulse sequence diagram of a variable flip angle (VFA) pulse sequence with a gradient echo readout. TR: repetition time, θn: excitation flip angle for the nth measurement, IMG: image acquisition (k-space readout), SPOIL: spoiler gradient.

Signal Modelling

The steady-state longitudinal magnetization of an ideal variable flip angle experiment can be analytically solved from the Bloch equations for the spoiled gradient echo pulse sequence {θn–TR}:

$$M_z(\theta_n) = M_0 \frac{\left(1 - e^{-TR/T_1}\right)\sin\theta_n}{1 - \cos\theta_n \, e^{-TR/T_1}} \qquad \text{(1)}$$

where Mz is the longitudinal magnetization, M0 is the magnetization at thermal equilibrium, TR is the pulse sequence repetition time (Figure 1), and θn is the excitation flip angle. The Mz curves of different T1 values for a range of θn and TR values are shown in Figure 2.

Figure 2. Variable flip angle technique signal curves (Eq. 1) for three different T1 values, approximating the main types of tissue in the brain at 3T.
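As a quick numerical illustration of Equation 1 (independent of the qMRLab `vfa_t1.analytical_solution` helper used in the cell below), here is a minimal NumPy sketch; the function and variable names are illustrative only:

```python
import numpy as np

def vfa_signal(theta_deg, TR, T1, M0=1.0):
    """Steady-state spoiled gradient echo signal of Equation 1."""
    theta = np.deg2rad(theta_deg)
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(theta) * (1 - E1) / (1 - np.cos(theta) * E1)

flip_angles = np.arange(1, 91)          # degrees, as in Figure 2
for T1 in (900, 1500, 4000):            # ms: white matter, grey matter, CSF
    Mz = vfa_signal(flip_angles, TR=25, T1=T1)
    print(T1, round(float(Mz.max()), 3), int(flip_angles[Mz.argmax()]))
```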
###Code
%% MATLAB/OCTAVE CODE
% Adds qMRLab to the path of the environment
cd ../qMRLab
startup
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 2 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
TR_range = 5:5:200;
params.EXC_FA = 1:90;
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
for ii = 1:length(TR_range)
params.TR = TR_range(ii);
% White matter
params.T1 = 900; % in milliseconds
signal_WM(ii,:) = vfa_t1.analytical_solution(params);
% Grey matter
params.T1 = 1500; % in milliseconds
signal_GM(ii,:) = vfa_t1.analytical_solution(params);
% CSF
params.T1 = 4000; % in milliseconds
signal_CSF(ii,:) = vfa_t1.analytical_solution(params);
end
%get params --from Octave
%get TR_range --from Octave
%get signal_WM --from Octave
%get signal_GM --from Octave
%get signal_CSF --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_WM[ii]))),
name = 'T<sub>1</sub> = 0.9 s (White Matter)',
text = 'T<sub>1</sub> = 0.9 s (White Matter)',
hoverinfo = 'x+y+text') for ii in range(len(TR_range))]
data1[4]['visible'] = True
data2 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_GM[ii]))),
name = 'T<sub>1</sub> = 1.5 s (Grey Matter)',
text = 'T<sub>1</sub> = 1.5 s (Grey Matter)',
hoverinfo = 'x+y+text') for ii in range(len(TR_range))]
data2[4]['visible'] = True
data3 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_CSF[ii]))),
name = 'T<sub>1</sub> = 4.0 s (Cerebrospinal Fluid)',
text = 'T<sub>1</sub> = 4.0 s (Cerebrospinal Fluid)',
hoverinfo = 'x+y+text') for ii in range(len(TR_range))]
data3[4]['visible'] = True
data = data1 + data2 + data3
steps = []
for i in range(len(TR_range)):
step = dict(
method = 'restyle',
args = ['visible', [False] * len(data1)],
label = str(TR_range[i])
)
step['args'][1][i] = True # Toggle i'th trace to "visible"
steps.append(step)
sliders = [dict(
x = 0,
y = -0.02,
active = 2,
currentvalue = {"prefix": "TR value (ms): <b>"},
pad = {"t": 50, "b": 10},
steps = steps
)]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=40,
b=60,
t=10,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.18,
showarrow=False,
text='Excitation Flip Angle (°)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.15,
y=0.5,
showarrow=False,
text='Long. Magnetization (M<sub>z</sub>)',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[0, params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.5,
y=0.9,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
sliders=sliders
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
###Markdown
From Figure 2, it is clearly seen that the flip angle at which the steady-state signal is maximized is dependent on the T1 and TR values. This flip angle is a well-known quantity, called the Ernst angle (Ernst & Anderson 1966), which can be solved analytically from Equation 1 using properties of calculus:

$$\theta_{Ernst} = \arccos\!\left(e^{-TR/T_1}\right) \qquad \text{(2)}$$

The closed-form solution (Equation 1) makes several assumptions which in practice may not always hold true if care is not taken. Mainly, it is assumed that the longitudinal magnetization has reached a steady state after a large number of TRs, and that the transverse magnetization is perfectly spoiled at the end of each TR. Bloch simulations – a numerical approach to solving the Bloch equations for a set of spins at each time point – provide a more realistic estimate of the signal if the number of repetition times is small (i.e. a steady state is not achieved). As can be seen from Figure 3, the number of repetitions required to reach a steady state not only depends on T1, but also on the flip angle; flip angles near the Ernst angle need more TRs to reach a steady state. Preparation pulses or an outward-in k-space acquisition pattern are typically sufficient to reach a steady state by the time that the center of k-space is acquired, which is where most of the image contrast resides.

Figure 3. Signal curves simulated using Bloch simulations (orange) for a number of repetitions ranging from 1 to 150, plotted against the ideal case (Equation 1 – blue). Simulation details: TR = 25 ms, T1 = 900 ms, 100 spins. Ideal spoiling was used for this set of Bloch simulations (transverse magnetization was set to 0 at the end of each TR).
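As a side note, Equation 2 is easy to evaluate directly; the short sketch below (names are illustrative, not part of the original post) computes the Ernst angle for the TR/T1 combinations used in Figures 2 and 3:

```python
import numpy as np

def ernst_angle(TR, T1):
    """Flip angle (degrees) that maximizes Equation 1: arccos(exp(-TR/T1))."""
    return np.rad2deg(np.arccos(np.exp(-TR / T1)))

print(ernst_angle(TR=25, T1=900))    # ~13.4 deg (white matter case of Figures 2-3)
print(ernst_angle(TR=25, T1=4000))   # ~6.4 deg  (CSF needs a smaller flip angle)
```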
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 3 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
% White matter
params.T1 = 900; % in milliseconds
params.T2 = 10000;
params.TR = 25;
params.TE = 5;
params.EXC_FA = 1:90;
Nex_range = 1:1:150;
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
for ii = 1:length(Nex_range)
params.Nex = Nex_range(ii);
signal_analytical(ii,:) = vfa_t1.analytical_solution(params);
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_blochsim(ii,:) = abs(complex(complex_signal));
end
%get params --from Octave
%get Nex_range --from Octave
%get signal_analytical --from Octave
%get signal_blochsim --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_analytical[ii]))),
name = 'Analytical Solution',
text = 'Analytical Solution',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data1[49]['visible'] = True
data2 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_blochsim[ii]))),
name = 'Bloch Simulation',
text = 'Bloch Simulation',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data2[49]['visible'] = True
data = data1 + data2
steps = []
for i in range(len(Nex_range)):
step = dict(
method = 'restyle',
args = ['visible', [False] * len(data1)],
label = str(Nex_range[i])
)
step['args'][1][i] = True # Toggle i'th trace to "visible"
steps.append(step)
sliders = [dict(
x = 0,
y = -0.02,
active = 49,
currentvalue = {"prefix": "n<sup>th</sup> TR: <b>"},
pad = {"t": 50, "b": 10},
steps = steps
)]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=40,
b=60,
t=10,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.18,
showarrow=False,
text='Excitation Flip Angle (°)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.15,
y=0.5,
showarrow=False,
text='Signal',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[0, params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.5,
y=0.9,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
sliders=sliders
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
###Markdown
Sufficient spoiling is likely the most challenging parameter to control for in a VFA experiment. A combination of both gradient spoiling and RF phase spoiling (Zur et al. 1991; Bernstein et al. 2004) is typically recommended (Figure 4). It has also been shown that the use of very strong gradients introduces diffusion effects (not considered in Figure 4), further improving the spoiling efficacy in the VFA pulse sequence (Yarnykh 2010).

Figure 4. Signal curves estimated using Bloch simulations for three categories of signal spoiling: (1) ideal spoiling (blue), (2) gradient & RF spoiling (orange), and (3) no spoiling (green). Simulation details: TR = 25 ms, T1 = 900 ms, T2 = 100 ms, TE = 5 ms, 100 spins. For the ideal spoiling case, the transverse magnetization is set to zero at the end of each TR. For the gradient & RF spoiling case, each spin is rotated by different increments of phase (2π / number of spins) to simulate complete decoherence from gradient spoiling, and the RF phase of the excitation pulse is incremented after each TR following $\phi_n = \phi_{n-1} + n\phi_0 = \tfrac{1}{2}\phi_0\left(n^2 + n + 2\right)$ (Bernstein et al. 2004) with $\phi_0$ = 117° (Zur et al. 1991).
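For concreteness, the quadratic RF spoiling phase schedule quoted in the caption can be generated with a few lines; this is a minimal sketch (assuming the convention written above, with ɸ0 = 117°) and is not part of the original simulation code:

```python
import numpy as np

def rf_spoil_phase(n, phi0_deg=117.0):
    """Transmit phase of the n-th excitation, 0.5 * phi0 * (n**2 + n + 2), wrapped to [0, 360) degrees."""
    return (0.5 * phi0_deg * (n ** 2 + n + 2)) % 360.0

n = np.arange(1, 11)
print(rf_spoil_phase(n))   # phases (degrees) of the first ten RF pulses
```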
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 4 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
% White matter
params.T1 = 900; % in milliseconds
params.T2 = 100;
params.TR = 25;
params.TE = 5;
params.EXC_FA = 1:90;
Nex_range = [1:9, 10:10:100];
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
for ii = 1:length(Nex_range)
params.Nex = Nex_range(ii);
params.crushFlag = 1;
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_ideal_spoil(ii,:) = abs(complex_signal);
params.inc = 117;
params.partialDephasing = 1;
params.partialDephasingFlag = 1;
params.crushFlag = 0;
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_optimal_crush_and_rf_spoil(ii,:) = abs(complex_signal);
params.inc = 0;
params.partialDephasing = 0;
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_no_gradient_and_rf_spoil(ii,:) = abs(complex_signal);
end
%get params --from Octave
%get Nex_range --from Octave
%get signal_ideal_spoil --from Octave
%get signal_optimal_crush_and_rf_spoil --from Octave
%get signal_no_gradient_and_rf_spoil --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_ideal_spoil[ii]))),
name = 'Ideal Spoiling',
text = 'Ideal Spoiling',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data1[10]['visible'] = True
data2 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_optimal_crush_and_rf_spoil[ii]))),
name = 'Gradient & RF Spoiling',
text = 'Gradient & RF Spoiling',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data2[10]['visible'] = True
data3 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_no_gradient_and_rf_spoil[ii]))),
name = 'No Spoiling',
text = 'No Spoiling',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data3[10]['visible'] = True
data = data1 + data2+ data3
steps = []
for i in range(len(Nex_range)):
step = dict(
method = 'restyle',
args = ['visible', [False] * len(data1)],
label = str(Nex_range[i])
)
step['args'][1][i] = True # Toggle i'th trace to "visible"
steps.append(step)
sliders = [dict(
x = 0,
y = -0.02,
active = 10,
currentvalue = {"prefix": "n<sup>th</sup> TR: <b>"},
pad = {"t": 50, "b": 10},
steps = steps
)]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=40,
b=60,
t=10,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.18,
showarrow=False,
text='Excitation Flip Angle (°)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.15,
y=0.5,
showarrow=False,
text='Signal',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[0, params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.5,
y=0.9,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
sliders=sliders
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
###Markdown
Data Fitting

At first glance, one could be tempted to fit VFA data using a non-linear least squares fitting algorithm such as Levenberg-Marquardt with Eq. 1, which typically only has two free fitting variables (T1 and M0). Although this is a valid way of estimating T1 from VFA data, it is rarely done in practice because a simple refactoring of Equation 1 allows T1 values to be estimated with a linear least-squares fitting algorithm, which substantially reduces the processing time. Without any approximations, Equation 1 can be rearranged into the form y = mx+b (Gupta 1977):

$$\frac{S_n}{\sin\theta_n} = e^{-TR/T_1}\,\frac{S_n}{\tan\theta_n} + M_0\left(1 - e^{-TR/T_1}\right) \qquad \text{(3)}$$

As the third term does not change between measurements (it is constant for each θn), it can be grouped into the constant for a simpler representation:

$$\frac{S_n}{\sin\theta_n} = e^{-TR/T_1}\,\frac{S_n}{\tan\theta_n} + C \qquad \text{(4)}$$

With this rearranged form of Equation 1, T1 can be simply estimated from the slope of a linear regression calculated from Sn/sin(θn) and Sn/tan(θn) values:

$$T_1 = -\frac{TR}{\ln(\mathrm{slope})} \qquad \text{(5)}$$

If data were acquired using only two flip angles – a very common VFA acquisition protocol – then the slope can be calculated using the elementary slope equation. Figure 5 displays both Equation 1 and 4 plotted for a noisy dataset.

Figure 5. Mean and standard deviation of the VFA signal plotted using the nonlinear form (Equation 1 – blue) and linear form (Equation 4 – red). Monte Carlo simulation details: SNR = 25, N = 1000. VFA simulation details: TR = 25 ms, T1 = 900 ms.
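To make the linear fit explicit in code, here is a minimal NumPy sketch of Equations 4–5 (the helper name is illustrative; it is not the qMRLab implementation used in the cells below):

```python
import numpy as np

def fit_t1_linear(S, flip_deg, TR):
    """Linear VFA fit (Equations 4-5): slope of S/sin(theta) vs. S/tan(theta) equals exp(-TR/T1)."""
    theta = np.deg2rad(np.asarray(flip_deg, dtype=float))
    slope, _ = np.polyfit(S / np.tan(theta), S / np.sin(theta), 1)
    return -TR / np.log(slope)

# Noise-free sanity check against Equation 1 (TR = 25 ms, T1 = 900 ms)
TR, T1 = 25.0, 900.0
flip_deg = np.arange(5, 35, 5)
theta = np.deg2rad(flip_deg)
E1 = np.exp(-TR / T1)
S = np.sin(theta) * (1 - E1) / (1 - np.cos(theta) * E1)
print(fit_t1_linear(S, flip_deg, TR))   # ~900
```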
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 5 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
params.EXC_FA = [1:4,5:5:90];
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
params.TR = 0.025;
params.EXC_FA = [2:9,10:5:90];
% White matter
x.M0 = 1;
x.T1 = 0.900; % in milliseconds
Model = vfa_t1;
Opt.SNR = 25;
Opt.TR = params.TR;
Opt.T1 = x.T1;
clear Model.Prot.VFAData.Mat(:,1)
Model.Prot.VFAData.Mat = zeros(length(params.EXC_FA),2);
Model.Prot.VFAData.Mat(:,1) = params.EXC_FA';
Model.Prot.VFAData.Mat(:,2) = Opt.TR;
for jj = 1:1000
[FitResult{jj}, noisyData{jj}] = Model.Sim_Single_Voxel_Curve(x,Opt,0);
fittedT1(jj) = FitResult{jj}.T1;
noisyData_array(jj,:) = noisyData{jj}.VFAData;
noisyData_array_div_sin(jj,:) = noisyData_array(jj,:) ./ sind(Model.Prot.VFAData.Mat(:,1))';
noisyData_array_div_tan(jj,:) = noisyData_array(jj,:) ./ tand(Model.Prot.VFAData.Mat(:,1))';
end
for kk=1:length(params.EXC_FA)
data_mean(kk) = mean(noisyData_array(:,kk));
data_std(kk) = std(noisyData_array(:,kk));
data_mean_div_sin(kk) = mean(noisyData_array_div_sin(:,kk));
data_std_div_sin(kk) = std(noisyData_array_div_sin(:,kk));
data_mean_div_tan(kk) = mean(noisyData_array_div_tan(:,kk));
data_std_div_tan(kk) = std(noisyData_array_div_tan(:,kk));
end
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
params_highres.EXC_FA = [2:1:90];
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
params_highres.TR = params.TR * 1000; % in milliseconds
% White matter
params_highres.T1 = x.T1*1000; % in milliseconds
signal_WM = vfa_t1.analytical_solution(params_highres);
signal_WM_div_sin = signal_WM ./ sind(params_highres.EXC_FA);
signal_WM_div_tan = signal_WM ./ tand(params_highres.EXC_FA);
%get params --from Octave
%get data_mean --from Octave
%get data_mean_div_sin --from Octave
%get data_mean_div_tan --from Octave
%get data_std --from Octave
%get data_std_div_sin --from Octave
%get data_std_div_tan --from Octave
%get params_highres --from Octave
%get signal_WM --from Octave
%get signal_WM_div_sin --from Octave
%get signal_WM_div_tan --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = dict(
visible = True,
x = params_highres["EXC_FA"],
y = signal_WM,
name = 'Analytical Solutions',
text = params["EXC_FA"],
mode = 'lines',
line = dict(
color = ('rgb(0, 0, 0)'),
dash = 'dot'),
hoverinfo='none')
data2 = dict(
visible = True,
x = signal_WM_div_tan,
y = signal_WM_div_sin,
name = 'Analytical Solutions',
text = params_highres["EXC_FA"],
mode = 'lines',
xaxis='x2',
yaxis='y2',
line = dict(
color = ('rgb(0, 0, 0)'),
dash = 'dot'
),
hoverinfo='none',
showlegend=False)
data3 = dict(
visible = True,
x = params["EXC_FA"],
y = data_mean,
name = 'Nonlinear Form - Noisy',
text = ["Flip angle: " + str(x) + "°" for x in params["EXC_FA"]],
mode = 'markers',
hoverinfo = 'y+text',
line = dict(
color = ('rgb(22, 96, 167)'),
),
error_y=dict(
type='data',
array=data_std,
visible=True,
color = ('rgb(142, 192, 240)')
))
data4 = dict(
visible = True,
x = data_mean_div_tan,
y = data_mean_div_sin,
name = 'Linear Form - Noisy',
text = ["Flip angle: " + str(x) + "°" for x in params["EXC_FA"]],
mode = 'markers',
xaxis='x2',
yaxis='y2',
hoverinfo = 'x+y+text',
line = dict(
color = ('rgb(205, 12, 24)'),
),
error_x=dict(
type='data',
array=data_std_div_tan,
visible=True,
color = ('rgb(248, 135, 142)')
),
error_y=dict(
type='data',
array=data_std_div_sin,
visible=True,
color = ('rgb(248, 135, 142)')
))
data = [data1, data2, data3, data4]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=80,
b=60,
t=60,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.14,
showarrow=False,
text='Excitation Flip Angle (<i>θ<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(21, 91, 158)')
),
xref='paper',
yref='paper'
),
dict(
x=-0.17,
y=0.5,
showarrow=False,
text='Signal (<i>S<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(21, 91, 158)')
),
textangle=-90,
xref='paper',
yref='paper'
),
dict(
x=0.5004254919715793,
y=1.15,
showarrow=False,
text='<i>S<sub>n</sub></i> / tan(<i>θ<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(169, 10, 20)')
),
xref='paper',
yref='paper'
),
dict(
x=1.16,
y=0.5,
showarrow=False,
text='<i>S<sub>n</sub></i> / sin(<i>θ<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(169, 10, 20)')
),
xref='paper',
yref='paper',
textangle=-90,
),
],
xaxis=dict(
autorange=False,
range=[params['EXC_FA'][0], params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
xaxis2=dict(
autorange=False,
range=[0, 1],
showgrid=False,
mirror=True,
overlaying= 'x',
anchor= 'y2',
side= 'top',
linecolor='black',
linewidth=2
),
yaxis2=dict(
autorange=False,
range=[0, 1],
showgrid=False,
overlaying= 'y',
anchor= 'x',
side= 'right',
linecolor='black',
linewidth=2
),
legend=dict(
x=0.32,
y=0.98,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
###Markdown
There are two important imaging protocol design considerations that should be taken into account when planning to use VFA: (1) how many and which flip angles to use to acquire VFA data, and (2) correcting inaccurate flip angles due to transmit RF field inhomogeneity.

Most VFA experiments use the minimum number of required flip angles (two) to minimize acquisition time. For this case, it has been shown that the flip angle choice resulting in the best precision for VFA T1 estimates for a sample with a single T1 value (i.e. single tissue) is the pair of flip angles that result in 71% of the maximum possible steady-state signal (i.e. at the Ernst angle) (Deoni et al. 2003; Schabel & Morrell 2009).

Time allowing, additional flip angles are often acquired at higher values and in between the two above, because greater signal differences between tissue T1 values are present there (e.g. Figure 2). Also, for more than two flip angles, Equations 1 and 4 do not have the same noise weighting for each fitting point, which may bias linear least-squares T1 estimates at lower SNRs. Thus, it has been recommended that low SNR data should be fitted with either Equation 1 using non-linear least-squares (slower fitting) or with a weighted linear least-squares form of Equation 4 (Chang et al. 2008).

Accurate knowledge of the flip angle values is very important to produce accurate T1 maps. Because of how the RF field interacts with matter (Sled & Pike 1998), the excitation RF field (B1+, or B1 for short) of a loaded RF coil results in spatial variations in intensity/amplitude, unless RF shimming is available to counteract this effect (not common at clinical field strengths). For quantitative measurements like VFA which are sensitive to this parameter, the flip angle can be corrected (voxelwise) relative to the nominal value by multiplying it with a scaling factor (B1) from a B1 map that is acquired during the same session:

$$\theta_{corrected} = B_1 \cdot \theta_{nominal} \qquad \text{(6)}$$

B1 in this context is normalized, meaning that it is unitless and has a value of 1 in voxels where the RF field has the expected amplitude (i.e. where the nominal flip angle is the actual flip angle). Figure 6 displays fitted VFA T1 values from a Monte Carlo dataset simulated using biased flip angle values, and fitted without/with B1 correction.

Figure 6. Mean and standard deviations of fitted VFA T1 values for a set of Monte Carlo simulations (SNR = 100, N = 1000), simulated using a wide range of biased flip angles and fitted without (blue) or with (red) B1 correction. Simulation parameters: TR = 25 ms, T1 = 900 ms, θnominal = 6° and 32° (optimized values for this TR/T1 combination). Notice how even after B1 correction, fitted T1 values at B1 values far from the nominal case (B1 = 1) exhibit larger variance, as the actual flip angles of the simulated signal deviate from the optimal values for this TR/T1 (Deoni et al. 2003).
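The correction in Equation 6 is a one-liner in practice; the sketch below (illustrative names, not the qMRLab code used in the next cell) shows how the nominal angles would be rescaled voxelwise before the linear fit:

```python
import numpy as np

def corrected_flip_angles(nominal_deg, B1):
    """Equation 6: scale the nominal flip angles by the (unitless) B1 value of a voxel."""
    return B1 * np.asarray(nominal_deg, dtype=float)

nominal = [6.0, 32.0]              # optimized pair for TR = 25 ms, T1 = 900 ms (Figure 6)
for B1 in (0.8, 1.0, 1.2):
    print(B1, corrected_flip_angles(nominal, B1))
```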
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 6 of the blog post
clear all
%% Setup parameters
% All times are in seconds
% All flip angles are in degrees
params.TR = 0.025; % in seconds
% White matter
params.T1 = 0.900; % in seconds
% Calculate optimal flip angles for a two flip angle VFA experiment (for this T1 and TR)
% The will be the nominal flip angles (the flip angles assumed by the "user", before a
% "realistic"B1 bias is applied)
nominal_EXC_FA = vfa_t1.find_two_optimal_flip_angles(params); % in degrees
disp('Nominal flip angles:')
disp(nominal_EXC_FA)
% Range of B1 values biasing the excitation flip angle away from their nominal values
B1Range = 0.1:0.1:2;
x.M0 = 1;
x.T1 = params.T1; % in seconds
Model = vfa_t1;
Model.voxelwise = 1;
Opt.SNR = 100;
Opt.TR = params.TR;
Opt.T1 = x.T1;
% Monte Carlo signal simulations
for ii = 1:1000
for jj = 1:length(B1Range)
B1 = B1Range(jj);
actual_EXC_FA = B1 * nominal_EXC_FA;
params.EXC_FA = actual_EXC_FA;
clear Model.Prot.VFAData.Mat(:,1)
Model.Prot.VFAData.Mat = zeros(length(params.EXC_FA),2);
Model.Prot.VFAData.Mat(:,1) = params.EXC_FA';
Model.Prot.VFAData.Mat(:,2) = Opt.TR;
[FitResult{ii,jj}, noisyData{ii,jj}] = Model.Sim_Single_Voxel_Curve(x,Opt,0);
noisyData_array(ii,jj,:) = noisyData{ii,jj}.VFAData;
end
end
%
Model = vfa_t1;
Model.voxelwise = 1;
FlipAngle = nominal_EXC_FA';
TR = params.TR .* ones(size(FlipAngle));
Model.Prot.VFAData.Mat = [FlipAngle TR];
data.VFAData(:,:,1,1) = noisyData_array(:,:,1);
data.VFAData(:,:,1,2) = noisyData_array(:,:,2);
data.Mask = repmat(ones(size(B1Range)),[size(noisyData_array,1),1]);
data.B1map = repmat(ones(size(B1Range)),[size(noisyData_array,1),1]);
FitResults_noB1Correction = FitData(data,Model,0);
data.B1map = repmat(B1Range,[size(noisyData_array,1),1]);
FitResults_withB1Correction = FitData(data,Model,0);
%%
%
mean_T1_noB1Correction = mean(FitResults_noB1Correction.T1);
mean_T1_withB1Correction = mean(FitResults_withB1Correction.T1);
std_T1_noB1Correction = std(FitResults_noB1Correction.T1);
std_T1_withB1Correction = std(FitResults_withB1Correction.T1);
%get B1Range --from Octave
%get mean_T1_noB1Correction --from Octave
%get mean_T1_withB1Correction --from Octave
%get std_T1_noB1Correction --from Octave
%get std_T1_withB1Correction --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = dict(
visible = True,
x = B1Range,
y = mean_T1_noB1Correction,
name = 'Nominal flip angles',
text = 'Nominal flip angles',
mode = 'lines+markers',
hoverinfo = 'x+y+text',
line = dict(
color = ('rgb(22, 96, 167)'),
),
error_y=dict(
type='data',
array=std_T1_noB1Correction,
visible=True,
color = ('rgb(142, 192, 240)')
))
data2 = dict(
visible = True,
x = B1Range,
y = mean_T1_withB1Correction,
name = 'B<sub>1</sub>-corrected flip angles',
text = 'B<sub>1</sub>-corrected flip angles',
mode = 'lines+markers',
hoverinfo = 'x+y+text',
line = dict(
color = ('rgb(205, 12, 24)'),
),
error_y=dict(
type='data',
array=std_T1_withB1Correction,
visible=True,
color = ('rgb(248, 135, 142)')
))
data = [data1, data2]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=80,
b=60,
t=60,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.14,
showarrow=False,
text='B<sub>1</sub> (n.u.)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.17,
y=0.5,
showarrow=False,
text='T<sub>1</sub> (s)',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[B1Range[0], B1Range[-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=False,
range=[0, max(mean_T1_noB1Correction)],
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.32,
y=0.98,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
###Markdown
Figure 7 displays an example VFA dataset and a B1 map in a healthy brain, along with the T1 map estimated using a linear fit (Equations 4 and 5).

Figure 7. Example variable flip angle dataset and B1 map of a healthy adult brain (left). The relevant VFA protocol parameters used were: TR = 15 ms, θnominal = 3° and 20°. The T1 map (right) was fitted using a linear regression (Equations 4 and 5).
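For a two-flip-angle protocol like this one, the linear fit reduces to an elementary slope computation that vectorizes directly over image voxels; the sketch below is a hypothetical NumPy version (S1, S2 and b1 stand for the two magnitude images and the B1 map; it is not the qMRLab fitting code used in the next cell):

```python
import numpy as np

def t1_map_two_angles(S1, S2, fa_deg, TR, b1):
    """Voxelwise two-point VFA T1 fit with B1 correction (Equations 4-6); TR and T1 share units."""
    a1 = np.deg2rad(fa_deg[0]) * b1
    a2 = np.deg2rad(fa_deg[1]) * b1
    slope = (S2 / np.sin(a2) - S1 / np.sin(a1)) / (S2 / np.tan(a2) - S1 / np.tan(a1))
    with np.errstate(divide='ignore', invalid='ignore'):
        return -TR / np.log(slope)

# e.g. T1 = t1_map_two_angles(S1, S2, (3.0, 20.0), 0.015, b1)   # protocol of Figure 7, T1 in seconds
```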
###Code
%% MATLAB/OCTAVE CODE
% Download variable flip angle brain MRI data for Figure 7 of the blog post
cmd = ['curl -L -o vfa_brain.zip https://osf.io/wj6eg/download/'];
[STATUS,MESSAGE] = unix(cmd);
unzip('vfa_brain.zip');
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 7 of the blog post
clear all
% Load data into environment, and rotate mask to be aligned with IR data
load('VFAData.mat');
load('B1map.mat');
load('Mask.mat');
% Format qMRLab vfa_t1 model parameters, and load them into the Model object
Model = vfa_t1;
FlipAngle = [ 3; 20];
TR = [0.015; 0.0150];
Model.Prot.VFAData.Mat = [FlipAngle, TR];
% Format data structure so that they may be fit by the model
data = struct();
data.VFAData= double(VFAData);
data.B1map= double(B1map);
data.Mask= double(Mask);
FitResults = FitData(data,Model,0); % The '0' flag is so that no wait bar is shown.
%% MATLAB/OCTAVE CODE
% Code used to re-orient the images to make pretty figures, and to assign variables with the axis lengths.
T1_map = imrotate(FitResults.T1.*Mask,-90);
T1_map(T1_map>5)=0;
T1_map = T1_map*1000; % Convert to ms
xAxis = [0:size(T1_map,2)-1];
yAxis = [0:size(T1_map,1)-1];
% Raw MRI data at different TI values
FA_03 = imrotate(squeeze(VFAData(:,:,:,1).*Mask),-90);
FA_20 = imrotate(squeeze(VFAData(:,:,:,2).*Mask),-90);
B1map = imrotate(squeeze(B1map.*Mask),-90);
%get T1_map --from Octave
%get FA_03 --from Octave
%get FA_20 --from Octave
%get B1map --from Octave
%get xAxis --from Octave
%get yAxis --from Octave
from plotly import tools
trace1 = go.Heatmap(x = xAxis,
y = yAxis,
z=FA_03,
colorscale='Greys',
showscale = False,
visible=False,
name = 'Signal')
trace2 = go.Heatmap(x = xAxis,
y = yAxis,
z=FA_20,
colorscale='Greys',
showscale = False,
visible=True,
name = 'Signal')
trace3 = go.Heatmap(x = xAxis,
y = yAxis,
z=B1map,
zmin=0.7,
zmax=1.3,
colorscale='RdBu',
showscale = False,
visible=False,
name = 'B1 values')
trace5 = go.Heatmap(x = xAxis,
y = yAxis,
z=T1_map,
zmin=0.0,
zmax=5000,
colorscale='Portland',
xaxis='x2',
yaxis='y2',
visible=True,
name = 'T1 values (ms)')
data=[trace1, trace2, trace3, trace5]
updatemenus = list([
dict(active=1,
x = 0.09,
xanchor = 'left',
y = -0.15,
yanchor = 'bottom',
direction = 'up',
font=dict(
family='Times New Roman',
size=16
),
buttons=list([
dict(label = '3 deg',
method = 'update',
args = [{'visible': [True, False, False, True]},
]),
dict(label = '20 deg',
method = 'update',
args = [{'visible': [False, True, False, True]},
]),
dict(label = 'B<sub>1</sub> map',
method = 'update',
args = [{'visible': [False, False, True, True]},
])
])
)
])
layout = dict(
width=560,
height=345,
margin = dict(
t=40,
r=50,
b=10,
l=50),
annotations=[
dict(
x=0.055,
y=1.15,
showarrow=False,
text='Input Data',
font=dict(
family='Times New Roman',
size=26
),
xref='paper',
yref='paper'
),
dict(
x=0.6,
y=1.15,
showarrow=False,
text='T<sub>1</sub> map',
font=dict(
family='Times New Roman',
size=26
),
xref='paper',
yref='paper'
),
dict(
x=1.22,
y=1.15,
showarrow=False,
text='T<sub>1</sub> (ms)',
font=dict(
family='Times New Roman',
size=26
),
xref='paper',
yref='paper'
),
],
xaxis = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0, 0.58]),
yaxis = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0, 1]),
xaxis2 = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0.40, 0.98]),
yaxis2 = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0, 1], anchor='x2'),
showlegend = False,
autosize = False,
updatemenus=updatemenus
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-heatmap', config = config)
###Output
_____no_output_____
###Markdown
Benefits and Pitfalls

It has been well reported in recent years that the accuracy of VFA T1 estimates is very sensitive to pulse sequence implementations (Stikov et al. 2015; Lutti & Weiskopf 2013; Baudrexel et al. 2018), and as such is less robust than the gold standard inversion recovery technique. In particular, the signal bias resulting from insufficient spoiling can result in inaccurate T1 estimates of up to 30% relative to inversion recovery estimated values (Stikov et al. 2015). VFA T1 map accuracy and precision is also strongly dependent on the quality of the measured B1 map (Lee et al. 2017), which can vary substantially between implementations (Boudreau et al. 2017). Modern rapid B1 mapping pulse sequences are not as widely available as VFA, resulting in some groups attempting alternative ways of removing the bias from the T1 maps, like generating an artificial B1 map through the use of image processing techniques (Liberman et al. 2014) or omitting B1 correction altogether (Yuan et al. 2012). The latter is not recommended, because most MRI scanners have default pulse sequences that, with careful protocol settings, can provide B1 maps of sufficient quality very rapidly (Boudreau et al. 2017; Wang et al. 2005; Samson et al. 2006).

Despite some drawbacks, VFA is still one of the most widely used T1 mapping methods in research. Its rapid acquisition time, rapid image processing time, and widespread availability make it a great candidate for use in other quantitative imaging acquisition protocols like quantitative magnetization transfer imaging (Yarnykh 2002; Cercignani et al. 2005) and dynamic contrast enhanced imaging (Sung et al. 2013; Li et al. 2018).

Works Cited

Baudrexel, S. et al., 2018. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization. Magn. Reson. Med., 79(6), pp.3082–3092.

Bernstein, M., King, K. & Zhou, X., 2004. Handbook of MRI Pulse Sequences, Elsevier.

Boudreau, M. et al., 2017. B1 mapping for bias-correction in quantitative T1 imaging of the brain at 3T using standard pulse sequences. J. Magn. Reson. Imaging, 46(6), pp.1673–1682.

Cercignani, M. et al., 2005. Three-dimensional quantitative magnetisation transfer imaging of the human brain. Neuroimage, 27(2), pp.436–441.

Chang, L.-C. et al., 2008. Linear least-squares method for unbiased estimation of T1 from SPGR signals. Magn. Reson. Med., 60(2), pp.496–501.

Christensen, K.A. et al., 1974. Optimal determination of relaxation times of Fourier transform nuclear magnetic resonance. Determination of spin-lattice relaxation times in chemically polarized species. J. Phys. Chem., 78(19), pp.1971–1977.

Deoni, S.C.L., Rutt, B.K. & Peters, T.M., 2003. Rapid combined T1 and T2 mapping using gradient recalled acquisition in the steady state. Magn. Reson. Med., 49(3), pp.515–526.

Ernst, R.R. & Anderson, W.A., 1966. Application of Fourier Transform Spectroscopy to Magnetic Resonance. Rev. Sci. Instrum., 37(1), pp.93–102.

Fram, E.K. et al., 1987. Rapid calculation of T1 using variable flip angle gradient refocused imaging. Magn. Reson. Imaging, 5(3), pp.201–208.

Gupta, R.K., 1977. A new look at the method of variable nutation angle for the measurement of spin-lattice relaxation times using Fourier transform NMR. J. Magn. Reson., 25(1), pp.231–235.

Homer, J. & Beevers, M.S., 1985. Driven-equilibrium single-pulse observation of T1 relaxation. A reevaluation of a rapid "new" method for determining NMR spin-lattice relaxation times. J. Magn. Reson., 63(2), pp.287–297.

Lee, Y., Callaghan, M.F. & Nagy, Z., 2017. Analysis of the Precision of Variable Flip Angle T1 Mapping with Emphasis on the Noise Propagated from RF Transmit Field Maps. Front. Neurosci., 11, p.106.

Liberman, G., Louzoun, Y. & Ben Bashat, D., 2014. T1 mapping using variable flip angle SPGR data with flip angle correction. J. Magn. Reson. Imaging, 40(1), pp.171–180.

Li, Z.F. et al., 2018. A simple B1 correction method for dynamic contrast-enhanced MRI. Phys. Med. Biol., 63(16), p.16NT01.

Lutti, A. & Weiskopf, N., 2013. Optimizing the accuracy of T1 mapping accounting for RF non-linearities and spoiling characteristics in FLASH imaging. In Proceedings of the 21st Annual Meeting of ISMRM, Salt Lake City, Utah, USA. p. 2478.

Samson, R.S. et al., 2006. A simple correction for B1 field errors in magnetization transfer ratio measurements. Magn. Reson. Imaging, 24(3), pp.255–263.

Schabel, M.C. & Morrell, G.R., 2009. Uncertainty in T1 mapping using the variable flip angle method with two flip angles. Phys. Med. Biol., 54(1), pp.N1–8.

Sled, J.G. & Pike, G.B., 1998. Standing-wave and RF penetration artifacts caused by elliptic geometry: an electrodynamic analysis of MRI. IEEE Trans. Med. Imaging, 17(4), pp.653–662.

Stikov, N. et al., 2015. On the accuracy of T1 mapping: Searching for common ground. Magn. Reson. Med., 73(2), pp.514–522.

Sung, K., Daniel, B.L. & Hargreaves, B.A., 2013. Transmit B1+ field inhomogeneity and T1 estimation errors in breast DCE-MRI at 3 tesla. J. Magn. Reson. Imaging, 38(2), pp.454–459.

Wang, J., Qiu, M. & Constable, R.T., 2005. In vivo method for correcting transmit/receive nonuniformities with phased array coils. Magn. Reson. Med., 53(3), pp.666–674.

Yarnykh, V.L., 2010. Optimal radiofrequency and gradient spoiling for improved accuracy of T1 and B1 measurements using fast steady-state techniques. Magn. Reson. Med., 63(6), pp.1610–1626.

Yarnykh, V.L., 2002. Pulsed Z-spectroscopic imaging of cross-relaxation parameters in tissues for human MRI: theory and clinical applications. Magn. Reson. Med., 47(5), pp.929–939.

Yuan, J. et al., 2012. Quantitative evaluation of dual-flip-angle T1 mapping on DCE-MRI kinetic parameter estimation in head and neck. Quant. Imaging Med. Surg., 2(4), pp.245–253.

Zur, Y., Wood, M.L. & Neuringer, L.J., 1991. Spoiling of transverse magnetization in steady-state sequences. Magn. Reson. Med., 21(2), pp.251–263.
###Code
# PYTHON CODE
display(HTML(
'<style type="text/css">'
'.output_subarea {'
'display: block;'
'margin-left: auto;'
'margin-right: auto;'
'}'
'.blog_body {'
'line-height: 2;'
'font-family: timesnewroman;'
'font-size: 18px;'
'margin-left: 0px;'
'margin-right: 0px;'
'}'
'.biblio_body {'
'line-height: 1.5;'
'font-family: timesnewroman;'
'font-size: 18px;'
'margin-left: 0px;'
'margin-right: 0px;'
'}'
'.note_body {'
'line-height: 1.25;'
'font-family: timesnewroman;'
'font-size: 18px;'
'margin-left: 0px;'
'margin-right: 0px;'
'color: #696969'
'}'
'.figure_caption {'
'line-height: 1.5;'
'font-family: timesnewroman;'
'font-size: 16px;'
'margin-left: 0px;'
'margin-right: 0px'
'</style>'
))
###Output
_____no_output_____
###Markdown
Welcome to a qMRLab interactive blog post Jupyter Notebook!If this is your first time running a Juptyer Notebook, there's a lot of tutorials available online to help. [Here's one](https://www.dataquest.io/blog/jupyter-notebook-tutorial/) for your convenience. IntroductionThis notebook contains everything needed to reproduce the Variable Flip Angle T1 blog post on the [qMRLab website](). In fact, this notebook generated the HTML for the blog post too! This notebook is currently running on a MyBinder server that only you can access, but if you want to be kept up-to-date on any changes that the developpers make to this notebook, you should go to it's [GitHub repository](https://github.com/qMRLab/t1_notebooks) and follow it by clicking the "Watch" button in the top right (you may need to create a GitHub account, if you don't have one already). TipsHere's a few things you can do in this notebook Code* Run the entire processing by clicking above on the "Kernel" tab, then "Restart & Run All". It will be complete when none of the cells have an asterix "\*" in the square brackets.* To change the code, you need to click once on code cells. To re-run that cell, click the "Run" button above when the cell is selected. * **Note:** Cells can depend on previous cells, or even on previous runs of the cell itself, so it's best to run all the previous cells beforehand.* This binder runs on SoS, which allows the mixing of Octave (i.e. an open-source MATLAB) and Python cells. Take a look a the drop down menu on the top right of the cells to know which one you are running.* To transfer data from cells of one language to another, you need to create a new cell in the incoming language and run `%get (param name) --from (outgoing language)`. See cells below for several examples within this notebook. HTML* To reproduce the HTML of the blog post, run the entire processing pipeline (see point one in the previous section), then save the notebook (save icon, top left). Now, click on the drop down menu on the left pannel, and select `%sossave --to html --force` . After a few seconds, it should output "Workflow saved to VariableFlipAngle.html" – click on the HTML name, and you're done!* Cells with tags called "scratch" are not displayed in the generated HTML.* Cells with the tag "report_output" display the output (e.g. figures) in the generated HTML.* Currently in an un-run notebook, the HTML is not formatted like the website. To do so, run the Python module import cell (` Module imports`) and then very last cell (`display(HTML(...)`).**If you have any other questions or comments, please raise them in a [GitHub issue](https://github.com/qMRLab/t1_notebooks/issues).** NoteThe following cell is meant to be displayed for instructional purposes in the blog post HTML when "All cells" gets displayed (i.e. the Octave code).
###Code
% **Blog post code introduction**
%
% Congrats on activating the "All cells" option in this interactive blog post =D
%
% Below, several new HTML blocks have appears prior to the figures, displaying the Octave/MATLAB code that was used to generate the figures in this blog post.
%
% If you want to reproduce the data on your own local computer, you simply need to have qMRLab installed in your Octave/MATLAB path and run the "startup.m" file, as is shown below.
%
% If you want to get under the hood and modify the code right now, you can do so in the Jupyter Notebook of this blog post hosted on MyBinder. The link to it is in the introduction above.
# PYTHON CODE
# Module imports
import matplotlib.pyplot as plt
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
config={'showLink': False, 'displayModeBar': False}
init_notebook_mode(connected=True)
from IPython.core.display import display, HTML
###Output
_____no_output_____
###Markdown
Variable Flip Angle T1 Mapping Variable flip angle (VFA) T1 mapping (Christensen et al. 1974; Gupta 1977; Fram et al. 1987), also known as Driven Equilibrium Single Pulse Observation of T1 (DESPOT1) (Homer & Beevers 1985; Deoni et al. 2003), is a rapid quantitative T1 measurement technique that is widely used to acquire 3D T1 maps (e.g. whole-brain) in a clinically feasible time. VFA estimates T1 values by acquiring multiple spoiled gradient echo acquisitions, each with different excitation flip angles (θn for n = 1, 2, .., N and θi ≠ θj). The steady-state signal of this pulse sequence (Figure 1) uses very short TRs (on the order of magnitude of 10 ms) and is very sensitive to T1 for a wide range of flip angles.VFA is a technique that originates from the NMR field, and was adopted because of its time efficiency and the ability to acquire accurate T1 values simultaneously for a wide range of values (Christensen et al. 1974; Gupta 1977). For imaging applications, VFA also benefits from an increase in SNR because it can be acquired using a 3D acquisition instead of multislice, which also helps to reduce slice profile effects. One important drawback of VFA for T1 mapping is that the signal is very sensitive to inaccuracies in the flip angle value, thus impacting the T1 estimates. In practice, the nominal flip angle (i.e. the value set at the scanner) is different than the actual flip angle experienced by the spins (e.g. at 3.0 T, variations of up to ±30%), an issue that increases with field strength. VFA typically requires the acquisition of another quantitative map, the transmit RF amplitude (B1+, or B1 for short), to calibrate the nominal flip angle to its actual value because of B1 inhomogeneities that occur in most loaded MRI coils (Sled & Pike 1998). The need to acquire an additional B1 map reduces the time savings offered by VFA over saturation-recovery techniques, and inaccuracies/imprecisions of the B1 map are also propagated into the VFA T1 map (Boudreau et al. 2017; Lee et al. 2017). Figure 1. Simplified pulse sequence diagram of a variable flip angle (VFA) pulse sequence with a gradient echo readout. TR: repetition time, θn: excitation flip angle for the nth measurement, IMG: image acquisition (k-space readout), SPOIL: spoiler gradient. Signal Modelling The steady-state longitudinal magnetization of an ideal variable flip angle experiment can be analytically solved from the Bloch equations for the spoiled gradient echo pulse sequence {θn–TR}:where Mz is the longitudinal magnetization, M0 is the magnetization at thermal equilibrium, TR is the pulse sequence repetition time (Figure 1), and θn is the excitation flip angle. The Mz curves of different T1 values for a range of θn and TR values are shown in Figure 2. Figure 2. Variable flip angle technique signal curves (Eq. 1) for three different T1 values, approximating the main types of tissue in the brain at 3T.
###Code
%% MATLAB/OCTAVE CODE
% Adds qMRLab to the path of the environment
cd ../qMRLab
startup
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 4 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
TR_range = 5:5:200;
params.EXC_FA = 1:90;
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
for ii = 1:length(TR_range)
params.TR = TR_range(ii);
% White matter
params.T1 = 900; % in milliseconds
signal_WM(ii,:) = vfa_t1.analytical_solution(params);
% Grey matter
params.T1 = 1500; % in milliseconds
signal_GM(ii,:) = vfa_t1.analytical_solution(params);
% CSF
params.T1 = 4000; % in milliseconds
signal_CSF(ii,:) = vfa_t1.analytical_solution(params);
end
%get params --from Octave
%get TR_range --from Octave
%get signal_WM --from Octave
%get signal_GM --from Octave
%get signal_CSF --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_WM[ii]))),
name = 'T<sub>1</sub> = 0.9 s (White Matter)',
text = 'T<sub>1</sub> = 0.9 s (White Matter)',
hoverinfo = 'x+y+text') for ii in range(len(TR_range))]
data1[4]['visible'] = True
data2 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_GM[ii]))),
name = 'T<sub>1</sub> = 1.5 s (Grey Matter)',
text = 'T<sub>1</sub> = 1.5 s (Grey Matter)',
hoverinfo = 'x+y+text') for ii in range(len(TR_range))]
data2[4]['visible'] = True
data3 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_CSF[ii]))),
name = 'T<sub>1</sub> = 4.0 s (Cerebrospinal Fluid)',
text = 'T<sub>1</sub> = 4.0 s (Cerebrospinal Fluid)',
hoverinfo = 'x+y+text') for ii in range(len(TR_range))]
data3[4]['visible'] = True
data = data1 + data2 + data3
steps = []
for i in range(len(TR_range)):
step = dict(
method = 'restyle',
args = ['visible', [False] * len(data1)],
label = str(TR_range[i])
)
step['args'][1][i] = True # Toggle i'th trace to "visible"
steps.append(step)
sliders = [dict(
x = 0,
y = -0.02,
active = 2,
currentvalue = {"prefix": "TR value (ms): <b>"},
pad = {"t": 50, "b": 10},
steps = steps
)]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=40,
b=60,
t=10,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.18,
showarrow=False,
text='Excitation Flip Angle (°)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.15,
y=0.5,
showarrow=False,
text='Long. Magnetization (M<sub>z</sub>)',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[0, params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.5,
y=0.9,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
sliders=sliders
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
###Markdown
From Figure 2, it is clearly seen that the flip angle at which the steady-state signal is maximized depends on the T1 and TR values. This flip angle is a well-known quantity, called the Ernst angle (Ernst & Anderson 1966), which can be solved analytically from Equation 1 using elementary calculus: $$\theta_{Ernst} = \cos^{-1}\left(e^{-TR/T_1}\right) \qquad \text{(Eq. 2)}$$ The closed-form solution (Equation 1) makes several assumptions which in practice may not always hold true if care is not taken. Mainly, it is assumed that the longitudinal magnetization has reached a steady state after a large number of TRs, and that the transverse magnetization is perfectly spoiled at the end of each TR. Bloch simulations – a numerical approach to solving the Bloch equations for a set of spins at each time point – provide a more realistic estimate of the signal if the number of repetition times is small (i.e. a steady state is not achieved). As can be seen from Figure 3, the number of repetitions required to reach a steady state depends not only on T1 but also on the flip angle; flip angles near the Ernst angle need more TRs to reach a steady state. Preparation pulses or an outward-in k-space acquisition pattern are typically sufficient to reach a steady state by the time the center of k-space is acquired, which is where most of the image contrast resides. Figure 3. Signal curves simulated using Bloch simulations (orange) for a number of repetitions ranging from 1 to 150, plotted against the ideal case (Equation 1 – blue). Simulation details: TR = 25 ms, T1 = 900 ms, 100 spins. Ideal spoiling was used for this set of Bloch simulations (transverse magnetization was set to 0 at the end of each TR).
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 4 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
% White matter
params.T1 = 900; % in milliseconds
params.T2 = 10000;
params.TR = 25;
params.TE = 5;
params.EXC_FA = 1:90;
Nex_range = 1:1:150;
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
for ii = 1:length(Nex_range)
params.Nex = Nex_range(ii);
signal_analytical(ii,:) = vfa_t1.analytical_solution(params);
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_blochsim(ii,:) = abs(complex(complex_signal));
end
%get params --from Octave
%get Nex_range --from Octave
%get signal_analytical --from Octave
%get signal_blochsim --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_analytical[ii]))),
name = 'Analytical Solution',
text = 'Analytical Solution',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data1[49]['visible'] = True
data2 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_blochsim[ii]))),
name = 'Bloch Simulation',
text = 'Bloch Simulation',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data2[49]['visible'] = True
data = data1 + data2
steps = []
for i in range(len(Nex_range)):
step = dict(
method = 'restyle',
args = ['visible', [False] * len(data1)],
label = str(Nex_range[i])
)
step['args'][1][i] = True # Toggle i'th trace to "visible"
steps.append(step)
sliders = [dict(
x = 0,
y = -0.02,
active = 49,
currentvalue = {"prefix": "n<sup>th</sup> TR: <b>"},
pad = {"t": 50, "b": 10},
steps = steps
)]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=40,
b=60,
t=10,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.18,
showarrow=False,
text='Excitation Flip Angle (°)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.15,
y=0.5,
showarrow=False,
text='Signal',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[0, params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.5,
y=0.9,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
sliders=sliders
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
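###Markdown
As a stripped-down companion to the Bloch simulations above, the sketch below simply iterates the longitudinal magnetization from TR to TR under the assumption of ideal spoiling, which is enough to see how many excitations are needed before the signal settles to the steady-state value of Equation 1. The TR, T1 and flip angle are example values.
###Code
# PYTHON CODE (illustrative sketch, assuming ideal spoiling; example parameter values)
import numpy as np

def signal_per_excitation(n_exc, flip_deg, TR, T1, M0=1.0):
    """Signal produced by each of n_exc excitations, with perfect spoiling every TR."""
    theta = np.deg2rad(flip_deg)
    E1 = np.exp(-TR / T1)
    Mz = M0
    signals = []
    for _ in range(n_exc):
        signals.append(Mz * np.sin(theta))              # signal created by this pulse
        Mz = M0 * (1 - E1) + Mz * np.cos(theta) * E1    # T1 recovery over one TR
    return np.array(signals)

s = signal_per_excitation(150, flip_deg=30.0, TR=25.0, T1=900.0)
within_1pct = np.abs(s - s[-1]) / s[-1] < 0.01
print('Within 1% of the steady state after', int(np.argmax(within_1pct)) + 1, 'excitations')
###Output
_____no_output_____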
###Markdown
Sufficient spoiling is likely the most challenging parameter to control for in a VFA experiment. A combination of both gradient spoiling and RF phase spoiling (Zur et al. 1991; Bernstein et al. 2004) is typically recommended (Figure 4). It has also been shown that the use of very strong gradients introduces diffusion effects (not considered in Figure 4), further improving the spoiling efficacy in the VFA pulse sequence (Yarnykh 2010). Figure 4. Signal curves estimated using Bloch simulations for three categories of signal spoiling: (1) ideal spoiling (blue), (2) gradient & RF spoiling (orange), and (3) no spoiling (green). Simulation details: TR = 25 ms, T1 = 900 ms, T2 = 100 ms, TE = 5 ms, 100 spins. For the ideal spoiling case, the transverse magnetization is set to zero at the end of each TR. For the gradient & RF spoiling case, each spin is rotated by a different increment of phase (2𝜋 / number of spins) to simulate complete decoherence from gradient spoiling, and the RF phase of the excitation pulse is ɸn = ɸn-1 + nɸ0 = ½ ɸ0(n² + n + 2) (Bernstein et al. 2004) with ɸ0 = 117° (Zur et al. 1991) after each TR.
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 4 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
% White matter
params.T1 = 900; % in milliseconds
params.T2 = 100;
params.TR = 25;
params.TE = 5;
params.EXC_FA = 1:90;
Nex_range = [1:9, 10:10:100];
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
for ii = 1:length(Nex_range)
params.Nex = Nex_range(ii);
params.crushFlag = 1;
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_ideal_spoil(ii,:) = abs(complex_signal);
params.inc = 117;
params.partialDephasing = 1;
params.partialDephasingFlag = 1;
params.crushFlag = 0;
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_optimal_crush_and_rf_spoil(ii,:) = abs(complex_signal);
params.inc = 0;
params.partialDephasing = 0;
[~, complex_signal] = vfa_t1.bloch_sim(params);
signal_no_gradient_and_rf_spoil(ii,:) = abs(complex_signal);
end
%get params --from Octave
%get Nex_range --from Octave
%get signal_ideal_spoil --from Octave
%get signal_optimal_crush_and_rf_spoil --from Octave
%get signal_no_gradient_and_rf_spoil --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_ideal_spoil[ii]))),
name = 'Ideal Spoiling',
text = 'Ideal Spoiling',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data1[10]['visible'] = True
data2 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_optimal_crush_and_rf_spoil[ii]))),
name = 'Gradient & RF Spoiling',
text = 'Gradient & RF Spoiling',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data2[10]['visible'] = True
data3 = [dict(
visible = False,
mode = 'lines',
x = params["EXC_FA"],
y = abs(np.squeeze(np.asarray(signal_no_gradient_and_rf_spoil[ii]))),
name = 'No Spoiling',
text = 'No Spoiling',
hoverinfo = 'x+y+text') for ii in range(len(Nex_range))]
data3[10]['visible'] = True
data = data1 + data2+ data3
steps = []
for i in range(len(Nex_range)):
step = dict(
method = 'restyle',
args = ['visible', [False] * len(data1)],
label = str(Nex_range[i])
)
step['args'][1][i] = True # Toggle i'th trace to "visible"
steps.append(step)
sliders = [dict(
x = 0,
y = -0.02,
active = 10,
currentvalue = {"prefix": "n<sup>th</sup> TR: <b>"},
pad = {"t": 50, "b": 10},
steps = steps
)]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=40,
b=60,
t=10,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.18,
showarrow=False,
text='Excitation Flip Angle (°)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.15,
y=0.5,
showarrow=False,
text='Signal',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[0, params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.5,
y=0.9,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
sliders=sliders
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
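###Markdown
The RF-spoiling phase schedule quoted in the caption of Figure 4 can be generated explicitly. The short sketch below only evaluates that quadratic phase formula with ɸ0 = 117°; it is an illustration, not a replacement for the Bloch simulations above.
###Code
# PYTHON CODE (illustrative sketch of the quadratic RF-spoiling phase schedule)
import numpy as np

phi_0 = 117.0                                # phase increment in degrees (Zur et al. 1991)
n = np.arange(1, 11)                         # first ten excitations
rf_phase = 0.5 * phi_0 * (n**2 + n + 2)      # phase of the nth excitation pulse, in degrees
print(np.mod(rf_phase, 360.0))
###Output
_____no_output_____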
###Markdown
Data Fitting At first glance, one could be tempted to fit VFA data using a non-linear least-squares fitting algorithm such as Levenberg-Marquardt with Eq. 1, which typically has only two free fitting variables (T1 and M0). Although this is a valid way of estimating T1 from VFA data, it is rarely done in practice because a simple refactoring of Equation 1 allows T1 values to be estimated with a linear least-squares fitting algorithm, which substantially reduces the processing time. Without any approximations, Equation 1 can be rearranged into the form y = mx+b (Gupta 1977): $$\frac{S_n}{\sin(\theta_n)} = e^{-TR/T_1}\frac{S_n}{\tan(\theta_n)} + M_0\left(1-e^{-TR/T_1}\right) \qquad \text{(Eq. 3)}$$ As the third term does not change between measurements (it is constant for each θn), it can be grouped into a constant for a simpler representation: $$\frac{S_n}{\sin(\theta_n)} = e^{-TR/T_1}\frac{S_n}{\tan(\theta_n)} + C \qquad \text{(Eq. 4)}$$ With this rearranged form of Equation 1, T1 can be simply estimated from the slope of a linear regression calculated from the Sn/sin(θn) and Sn/tan(θn) values: $$T_1 = -\frac{TR}{\ln(\mathrm{slope})} \qquad \text{(Eq. 5)}$$ If data were acquired using only two flip angles – a very common VFA acquisition protocol – then the slope can be calculated using the elementary slope equation. Figure 5 displays both Equations 1 and 4 plotted for a noisy dataset. Figure 5. Mean and standard deviation of the VFA signal plotted using the nonlinear form (Equation 1 – blue) and linear form (Equation 4 – red). Monte Carlo simulation details: SNR = 25, N = 1000. VFA simulation details: TR = 25 ms, T1 = 900 ms.
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 4 of the blog post
clear all
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
params.EXC_FA = [1:4,5:5:90];
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
params.TR = 0.025;
params.EXC_FA = [2:9,10:5:90];
% White matter
x.M0 = 1;
x.T1 = 0.900; % in milliseconds
Model = vfa_t1;
Opt.SNR = 25;
Opt.TR = params.TR;
Opt.T1 = x.T1;
clear Model.Prot.VFAData.Mat(:,1)
Model.Prot.VFAData.Mat = zeros(length(params.EXC_FA),2);
Model.Prot.VFAData.Mat(:,1) = params.EXC_FA';
Model.Prot.VFAData.Mat(:,2) = Opt.TR;
for jj = 1:1000
[FitResult{jj}, noisyData{jj}] = Model.Sim_Single_Voxel_Curve(x,Opt,0);
fittedT1(jj) = FitResult{jj}.T1;
noisyData_array(jj,:) = noisyData{jj}.VFAData;
noisyData_array_div_sin(jj,:) = noisyData_array(jj,:) ./ sind(Model.Prot.VFAData.Mat(:,1))';
noisyData_array_div_tan(jj,:) = noisyData_array(jj,:) ./ tand(Model.Prot.VFAData.Mat(:,1))';
end
for kk=1:length(params.EXC_FA)
data_mean(kk) = mean(noisyData_array(:,kk));
data_std(kk) = std(noisyData_array(:,kk));
data_mean_div_sin(kk) = mean(noisyData_array_div_sin(:,kk));
data_std_div_sin(kk) = std(noisyData_array_div_sin(:,kk));
data_mean_div_tan(kk) = mean(noisyData_array_div_tan(:,kk));
data_std_div_tan(kk) = std(noisyData_array_div_tan(:,kk));
end
%% Setup parameters
% All times are in milliseconds
% All flip angles are in degrees
params_highres.EXC_FA = [2:1:90];
%% Calculate signals
%
% To see all the options available, run `help vfa_t1.analytical_solution`
params_highres.TR = params.TR * 1000; % in milliseconds
% White matter
params_highres.T1 = x.T1*1000; % in milliseconds
signal_WM = vfa_t1.analytical_solution(params_highres);
signal_WM_div_sin = signal_WM ./ sind(params_highres.EXC_FA);
signal_WM_div_tan = signal_WM ./ tand(params_highres.EXC_FA);
%get params --from Octave
%get data_mean --from Octave
%get data_mean_div_sin --from Octave
%get data_mean_div_tan --from Octave
%get data_std --from Octave
%get data_std_div_sin --from Octave
%get data_std_div_tan --from Octave
%get params_highres --from Octave
%get signal_WM --from Octave
%get signal_WM_div_sin --from Octave
%get signal_WM_div_tan --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = dict(
visible = True,
x = params_highres["EXC_FA"],
y = signal_WM,
name = 'Analytical Solutions',
text = params["EXC_FA"],
mode = 'lines',
line = dict(
color = ('rgb(0, 0, 0)'),
dash = 'dot'),
hoverinfo='none')
data2 = dict(
visible = True,
x = signal_WM_div_tan,
y = signal_WM_div_sin,
name = 'Analytical Solutions',
text = params_highres["EXC_FA"],
mode = 'lines',
xaxis='x2',
yaxis='y2',
line = dict(
color = ('rgb(0, 0, 0)'),
dash = 'dot'
),
hoverinfo='none',
showlegend=False)
data3 = dict(
visible = True,
x = params["EXC_FA"],
y = data_mean,
name = 'Nonlinear Form - Noisy',
text = ["Flip angle: " + str(x) + "°" for x in params["EXC_FA"]],
mode = 'markers',
hoverinfo = 'y+text',
line = dict(
color = ('rgb(22, 96, 167)'),
),
error_y=dict(
type='data',
array=data_std,
visible=True,
color = ('rgb(142, 192, 240)')
))
data4 = dict(
visible = True,
x = data_mean_div_tan,
y = data_mean_div_sin,
name = 'Linear Form - Noisy',
text = ["Flip angle: " + str(x) + "°" for x in params["EXC_FA"]],
mode = 'markers',
xaxis='x2',
yaxis='y2',
hoverinfo = 'x+y+text',
line = dict(
color = ('rgb(205, 12, 24)'),
),
error_x=dict(
type='data',
array=data_std_div_tan,
visible=True,
color = ('rgb(248, 135, 142)')
),
error_y=dict(
type='data',
array=data_std_div_sin,
visible=True,
color = ('rgb(248, 135, 142)')
))
data = [data1, data2, data3, data4]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=80,
b=60,
t=60,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.14,
showarrow=False,
text='Excitation Flip Angle (<i>θ<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(21, 91, 158)')
),
xref='paper',
yref='paper'
),
dict(
x=-0.17,
y=0.5,
showarrow=False,
text='Signal (<i>S<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(21, 91, 158)')
),
textangle=-90,
xref='paper',
yref='paper'
),
dict(
x=0.5004254919715793,
y=1.15,
showarrow=False,
text='<i>S<sub>n</sub></i> / tan(<i>θ<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(169, 10, 20)')
),
xref='paper',
yref='paper'
),
dict(
x=1.16,
y=0.5,
showarrow=False,
text='<i>S<sub>n</sub></i> / sin(<i>θ<sub>n</sub></i>)',
font=dict(
family='Times New Roman',
size=22,
color=('rgb(169, 10, 20)')
),
xref='paper',
yref='paper',
textangle=-90,
),
],
xaxis=dict(
autorange=False,
range=[params['EXC_FA'][0], params['EXC_FA'][-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=True,
showgrid=False,
linecolor='black',
linewidth=2
),
xaxis2=dict(
autorange=False,
range=[0, 1],
showgrid=False,
mirror=True,
overlaying= 'x',
anchor= 'y2',
side= 'top',
linecolor='black',
linewidth=2
),
yaxis2=dict(
autorange=False,
range=[0, 1],
showgrid=False,
overlaying= 'y',
anchor= 'x',
side= 'right',
linecolor='black',
linewidth=2
),
legend=dict(
x=0.32,
y=0.98,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
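###Markdown
Outside of qMRLab, the linear estimation described above (Equations 3 to 5) also amounts to only a few lines. The sketch below is a minimal illustration under simple assumptions: noiseless signals are simulated for an example TR, T1 and M0, and T1 is then recovered from the slope of Sn/sin(θn) against Sn/tan(θn).
###Code
# PYTHON CODE (illustrative sketch of the linear fit, Equations 3-5; example values)
import numpy as np

def fit_vfa_t1_linear(signals, flip_deg, TR):
    """Estimate T1 and M0 from the slope/intercept of S/sin(theta) vs S/tan(theta)."""
    theta = np.deg2rad(np.asarray(flip_deg, dtype=float))
    S = np.asarray(signals, dtype=float)
    slope, intercept = np.polyfit(S / np.tan(theta), S / np.sin(theta), 1)
    T1 = -TR / np.log(slope)
    M0 = intercept / (1 - slope)
    return T1, M0

# Simulate noiseless signals for a known T1, then fit them back
TR, T1_true, M0_true = 25.0, 900.0, 1.0              # ms, ms, arbitrary units
angles = np.array([5.0, 15.0, 30.0, 60.0])           # degrees
theta = np.deg2rad(angles)
E1 = np.exp(-TR / T1_true)
signals = M0_true * np.sin(theta) * (1 - E1) / (1 - np.cos(theta) * E1)
print(fit_vfa_t1_linear(signals, angles, TR))        # ~ (900.0, 1.0)
###Output
_____no_output_____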
###Markdown
There are two important imaging protocol design considerations that should be taken into account when planning to use VFA: (1) how many and which flip angles to use to acquire VFA data, and (2) correcting inaccurate flip angles due to transmit RF field inhomogeneity. Most VFA experiments use the minimum number of required flip angles (two) to minimize acquisition time. For this case, it has been shown that the flip angles resulting in the best precision for VFA T1 estimates for a sample with a single T1 value (i.e. single tissue) are the two flip angles that result in 71% of the maximum possible steady-state signal (i.e. at the Ernst angle) (Deoni et al. 2003; Schabel & Morrell 2009). Time allowing, additional flip angles are often acquired at higher values and in between the two above, because greater signal differences between tissue T1 values are present there (e.g. Figure 2). Also, for more than two flip angles, Equations 1 and 4 do not have the same noise weighting for each fitting point, which may bias linear least-squares T1 estimates at lower SNRs. Thus, it has been recommended that low SNR data should be fitted with either Equation 1 using non-linear least-squares (slower fitting) or with a weighted linear least-squares form of Equation 4 (Chang et al. 2008). Accurate knowledge of the flip angle values is very important to produce accurate T1 maps. Because of how the RF field interacts with matter (Sled & Pike 1998), the excitation RF field (B1+, or B1 for short) of a loaded RF coil results in spatial variations in intensity/amplitude, unless RF shimming is available to counteract this effect (not common at clinical field strengths). For quantitative measurements like VFA which are sensitive to this parameter, the flip angle can be corrected (voxelwise) relative to the nominal value by multiplying it with a scaling factor (B1) from a B1 map that is acquired during the same session: $$\theta_{corrected} = B_1 \cdot \theta_{nominal}$$ B1 in this context is normalized, meaning that it is unitless and has a value of 1 in voxels where the RF field has the expected amplitude (i.e. where the nominal flip angle is the actual flip angle). Figure 6 displays fitted VFA T1 values from a Monte Carlo dataset simulated using biased flip angle values, and fitted without/with B1 correction. Figure 6. Mean and standard deviations of fitted VFA T1 values for a set of Monte Carlo simulations (SNR = 100, N = 1000), simulated using a wide range of biased flip angles and fitted without (blue) or with (red) B1 correction. Simulation parameters: TR = 25 ms, T1 = 900 ms, θnominal = 6° and 32° (optimized values for this TR/T1 combination). Notice how even after B1 correction, fitted T1 values at B1 values far from the nominal case (B1 = 1) exhibit larger variance, as the actual flip angles of the simulated signal deviate from the optimal values for this TR/T1 (Deoni et al. 2003).
###Code
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 4 of the blog post
clear all
%% Setup parameters
% All times are in seconds
% All flip angles are in degrees
params.TR = 0.025; % in seconds
% White matter
params.T1 = 0.900; % in seconds
% Calculate optimal flip angles for a two flip angle VFA experiment (for this T1 and TR)
% The will be the nominal flip angles (the flip angles assumed by the "user", before a
% "realistic"B1 bias is applied)
nominal_EXC_FA = vfa_t1.find_two_optimal_flip_angles(params); % in degrees
disp('Nominal flip angles:')
disp(nominal_EXC_FA)
% Range of B1 values biasing the excitation flip angle away from their nominal values
B1Range = 0.1:0.1:2;
x.M0 = 1;
x.T1 = params.T1; % in seconds
Model = vfa_t1;
Opt.SNR = 100;
Opt.TR = params.TR;
Opt.T1 = x.T1;
% Monte Carlo signal simulations
for ii = 1:1000
for jj = 1:length(B1Range)
B1 = B1Range(jj);
actual_EXC_FA = B1 * nominal_EXC_FA;
params.EXC_FA = actual_EXC_FA;
clear Model.Prot.VFAData.Mat(:,1)
Model.Prot.VFAData.Mat = zeros(length(params.EXC_FA),2);
Model.Prot.VFAData.Mat(:,1) = params.EXC_FA';
Model.Prot.VFAData.Mat(:,2) = Opt.TR;
[FitResult{ii,jj}, noisyData{ii,jj}] = Model.Sim_Single_Voxel_Curve(x,Opt,0);
noisyData_array(ii,jj,:) = noisyData{ii,jj}.VFAData;
end
end
%
Model = vfa_t1;
FlipAngle = nominal_EXC_FA';
TR = params.TR .* ones(size(FlipAngle));
Model.Prot.VFAData.Mat = [FlipAngle TR];
data.VFAData(:,:,1,1) = noisyData_array(:,:,1);
data.VFAData(:,:,1,2) = noisyData_array(:,:,2);
data.Mask = repmat(ones(size(B1Range)),[size(noisyData_array,1),1]);
data.B1map = repmat(ones(size(B1Range)),[size(noisyData_array,1),1]);
FitResults_noB1Correction = FitData(data,Model,0);
data.B1map = repmat(B1Range,[size(noisyData_array,1),1]);
FitResults_withB1Correction = FitData(data,Model,0);
%%
%
mean_T1_noB1Correction = mean(FitResults_noB1Correction.T1);
mean_T1_withB1Correction = mean(FitResults_withB1Correction.T1);
std_T1_noB1Correction = std(FitResults_noB1Correction.T1);
std_T1_withB1Correction = std(FitResults_withB1Correction.T1);
%get B1Range --from Octave
%get mean_T1_noB1Correction --from Octave
%get mean_T1_withB1Correction --from Octave
%get std_T1_noB1Correction --from Octave
%get std_T1_withB1Correction --from Octave
# PYTHON CODE
init_notebook_mode(connected=True)
data1 = dict(
visible = True,
x = B1Range,
y = mean_T1_noB1Correction,
name = 'Nominal flip angles',
text = 'Nominal flip angles',
mode = 'lines+markers',
hoverinfo = 'x+y+text',
line = dict(
color = ('rgb(22, 96, 167)'),
),
error_y=dict(
type='data',
array=std_T1_noB1Correction,
visible=True,
color = ('rgb(142, 192, 240)')
))
data2 = dict(
visible = True,
x = B1Range,
y = mean_T1_withB1Correction,
name = 'B<sub>1</sub>-corrected flip angles',
text = 'B<sub>1</sub>-corrected flip angles',
mode = 'lines+markers',
hoverinfo = 'x+y+text',
line = dict(
color = ('rgb(205, 12, 24)'),
),
error_y=dict(
type='data',
array=std_T1_withB1Correction,
visible=True,
color = ('rgb(248, 135, 142)')
))
data = [data1, data2]
layout = go.Layout(
width=580,
height=450,
margin=go.layout.Margin(
l=80,
r=80,
b=60,
t=60,
),
annotations=[
dict(
x=0.5004254919715793,
y=-0.14,
showarrow=False,
text='B<sub>1</sub> (n.u.)',
font=dict(
family='Times New Roman',
size=22
),
xref='paper',
yref='paper'
),
dict(
x=-0.17,
y=0.5,
showarrow=False,
text='T<sub>1</sub> (s)',
font=dict(
family='Times New Roman',
size=22
),
textangle=-90,
xref='paper',
yref='paper'
),
],
xaxis=dict(
autorange=False,
range=[B1Range[0], B1Range[-1]],
showgrid=False,
linecolor='black',
linewidth=2
),
yaxis=dict(
autorange=False,
range=[0, max(mean_T1_noB1Correction)],
showgrid=False,
linecolor='black',
linewidth=2
),
legend=dict(
x=0.32,
y=0.98,
traceorder='normal',
font=dict(
family='Times New Roman',
size=12,
color='#000'
),
bordercolor='#000000',
borderwidth=2
),
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-line', config = config)
###Output
_____no_output_____
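###Markdown
To make the effect of flip angle miscalibration concrete, the sketch below repeats the idea of Figure 6 in a noiseless, stripped-down form: signals are simulated with flip angles scaled by an assumed B1 value of 0.8, then fitted once with the nominal angles and once with the B1-corrected angles.
###Code
# PYTHON CODE (illustrative sketch of B1 correction; noiseless, example values)
import numpy as np

def two_angle_t1(signals, flip_deg, TR):
    """Two-point linear T1 estimate (elementary slope of S/sin vs S/tan)."""
    theta = np.deg2rad(np.asarray(flip_deg, dtype=float))
    y = signals / np.sin(theta)
    x = signals / np.tan(theta)
    slope = (y[1] - y[0]) / (x[1] - x[0])
    return -TR / np.log(slope)

TR, T1_true, B1 = 25.0, 900.0, 0.8                   # actual flip angles are 80% of nominal
nominal = np.array([6.0, 32.0])                      # nominal flip angles, degrees
actual = np.deg2rad(B1 * nominal)                    # flip angles experienced by the spins
E1 = np.exp(-TR / T1_true)
signals = np.sin(actual) * (1 - E1) / (1 - np.cos(actual) * E1)

print('Without B1 correction: T1 =', round(two_angle_t1(signals, nominal, TR), 1), 'ms')
print('With B1 correction:    T1 =', round(two_angle_t1(signals, B1 * nominal, TR), 1), 'ms')
###Output
_____no_output_____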
###Markdown
Figure 7 displays an example VFA dataset and a B1 map in a healthy brain, along with the T1 map estimated using a linear fit (Equations 4 and 5). Figure 7. Example variable flip angle dataset and B1 map of a healthy adult brain (left). The relevant VFA protocol parameters used were: TR = 15 ms, θnominal = 3° and 20°. The T1 map (right) was fitted using a linear regression (Equations 4 and 5).
###Code
%% MATLAB/OCTAVE CODE
% Download variable flip angle brain MRI data for Figure 7 of the blog post
cmd = ['curl -L -o vfa_brain.zip https://osf.io/wj6eg/download/'];
[STATUS,MESSAGE] = unix(cmd);
unzip('vfa_brain.zip');
%% MATLAB/OCTAVE CODE
% Code used to generate the data required for Figure 5 of the blog post
clear all
% Load data into environment, and rotate mask to be aligned with IR data
load('VFAData.mat');
load('B1map.mat');
load('Mask.mat');
% Format qMRLab vfa_t1 model parameters, and load them into the Model object
Model = vfa_t1;
FlipAngle = [ 3; 20];
TR = [0.015; 0.0150];
Model.Prot.VFAData.Mat = [FlipAngle, TR];
% Format data structure so that they may be fit by the model
data = struct();
data.VFAData= double(VFAData);
data.B1map= double(B1map);
data.Mask= double(Mask);
FitResults = FitData(data,Model,0); % The '0' flag is so that no wait bar is shown.
%% MATLAB/OCTAVE CODE
% Code used to re-orient the images to make pretty figures, and to assign variables with the axis lengths.
T1_map = imrotate(FitResults.T1.*Mask,-90);
T1_map(T1_map>5)=0;
T1_map = T1_map*1000; % Convert to ms
xAxis = [0:size(T1_map,2)-1];
yAxis = [0:size(T1_map,1)-1];
% Raw MRI data at different TI values
FA_03 = imrotate(squeeze(VFAData(:,:,:,1).*Mask),-90);
FA_20 = imrotate(squeeze(VFAData(:,:,:,2).*Mask),-90);
B1map = imrotate(squeeze(B1map.*Mask),-90);
%get T1_map --from Octave
%get FA_03 --from Octave
%get FA_20 --from Octave
%get B1map --from Octave
%get xAxis --from Octave
%get yAxis --from Octave
from plotly import tools
trace1 = go.Heatmap(x = xAxis,
y = yAxis,
z=FA_03,
colorscale='Greys',
showscale = False,
visible=False,
name = 'Signal')
trace2 = go.Heatmap(x = xAxis,
y = yAxis,
z=FA_20,
colorscale='Greys',
showscale = False,
visible=True,
name = 'Signal')
trace3 = go.Heatmap(x = xAxis,
y = yAxis,
z=B1map,
zmin=0.7,
zmax=1.3,
colorscale='RdBu',
showscale = False,
visible=False,
name = 'B1 values')
trace5 = go.Heatmap(x = xAxis,
y = yAxis,
z=T1_map,
zmin=0.0,
zmax=5000,
colorscale='Portland',
xaxis='x2',
yaxis='y2',
visible=True,
name = 'T1 values (ms)')
data=[trace1, trace2, trace3, trace5]
updatemenus = list([
dict(active=1,
x = 0.09,
xanchor = 'left',
y = -0.15,
yanchor = 'bottom',
direction = 'up',
font=dict(
family='Times New Roman',
size=16
),
buttons=list([
dict(label = '3 deg',
method = 'update',
args = [{'visible': [True, False, False, True]},
]),
dict(label = '20 deg',
method = 'update',
args = [{'visible': [False, True, False, True]},
]),
dict(label = 'B<sub>1</sub> map',
method = 'update',
args = [{'visible': [False, False, True, True]},
])
])
)
])
layout = dict(
width=560,
height=345,
margin = dict(
t=40,
r=50,
b=10,
l=50),
annotations=[
dict(
x=0.055,
y=1.15,
showarrow=False,
text='Input Data',
font=dict(
family='Times New Roman',
size=26
),
xref='paper',
yref='paper'
),
dict(
x=0.6,
y=1.15,
showarrow=False,
text='T<sub>1</sub> map',
font=dict(
family='Times New Roman',
size=26
),
xref='paper',
yref='paper'
),
dict(
x=1.22,
y=1.15,
showarrow=False,
text='T<sub>1</sub> (ms)',
font=dict(
family='Times New Roman',
size=26
),
xref='paper',
yref='paper'
),
],
xaxis = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0, 0.58]),
yaxis = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0, 1]),
xaxis2 = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0.40, 0.98]),
yaxis2 = dict(range = [0,127], autorange = False,
showgrid = False, zeroline = False, showticklabels = False,
ticks = '', domain=[0, 1], anchor='x2'),
showlegend = False,
autosize = False,
updatemenus=updatemenus
)
fig = dict(data=data, layout=layout)
iplot(fig, filename = 'basic-heatmap', config = config)
###Output
_____no_output_____
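###Markdown
Conceptually, the voxelwise fit performed by qMRLab above reduces to applying Equations 4 and 5 in every voxel after scaling the nominal flip angles by the B1 map. The sketch below shows this on a tiny synthetic volume; all values are invented to demonstrate the call pattern and are unrelated to the downloaded dataset.
###Code
# PYTHON CODE (illustrative sketch of a voxelwise two-angle linear T1 fit with B1 correction)
import numpy as np

def vfa_t1_map(S1, S2, flip_deg, TR, b1_map):
    """Voxelwise T1 from two flip-angle images, after scaling the nominal angles by B1."""
    th1 = np.deg2rad(flip_deg[0]) * b1_map
    th2 = np.deg2rad(flip_deg[1]) * b1_map
    y1, x1 = S1 / np.sin(th1), S1 / np.tan(th1)
    y2, x2 = S2 / np.sin(th2), S2 / np.tan(th2)
    with np.errstate(divide='ignore', invalid='ignore'):
        T1 = -TR / np.log((y2 - y1) / (x2 - x1))
    return T1

# Tiny synthetic example (not the downloaded data): T1 = 900 ms everywhere
rng = np.random.default_rng(0)
TR, T1_true, angles = 15.0, 900.0, np.array([3.0, 20.0])
b1 = rng.uniform(0.9, 1.1, size=(4, 4))               # simulated B1 map
theta = np.deg2rad(angles)[:, None, None] * b1        # actual flip angles, per voxel
E1 = np.exp(-TR / T1_true)
S = np.sin(theta) * (1 - E1) / (1 - np.cos(theta) * E1)
print(vfa_t1_map(S[0], S[1], angles, TR, b1).round(1))
###Output
_____no_output_____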
###Markdown
Benefits and Pitfalls It has been well reported in recent years that the accuracy of VFA T1 estimates is very sensitive to pulse sequence implementations (Stikov et al. 2015; Lutti & Weiskopf 2013; Baudrexel et al. 2018), and as such is less robust than the gold standard inversion recovery technique. In particular, the signal bias resulting from insufficient spoiling can result in inaccurate T1 estimates of up to 30% relative to inversion recovery estimated values (Stikov et al. 2015). VFA T1 map accuracy and precision are also strongly dependent on the quality of the measured B1 map (Lee et al. 2017), which can vary substantially between implementations (Boudreau et al. 2017). Modern rapid B1 mapping pulse sequences are not as widely available as VFA, resulting in some groups attempting alternative ways of removing the bias from the T1 maps, like generating an artificial B1 map through the use of image processing techniques (Liberman et al. 2014) or omitting B1 correction altogether (Yuan et al. 2012). The latter is not recommended, because most MRI scanners have default pulse sequences that, with careful protocol settings, can provide B1 maps of sufficient quality very rapidly (Boudreau et al. 2017; Wang et al. 2005; Samson et al. 2006). Despite some drawbacks, VFA is still one of the most widely used T1 mapping methods in research. Its rapid acquisition time, rapid image processing time, and widespread availability make it a great candidate for use in other quantitative imaging acquisition protocols like quantitative magnetization transfer imaging (Yarnykh 2002; Cercignani et al. 2005) and dynamic contrast enhanced imaging (Sung et al. 2013; Li et al. 2018). Works Cited
Baudrexel, S. et al., 2018. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization. Magn. Reson. Med., 79(6), pp.3082–3092.
Bernstein, M., King, K. & Zhou, X., 2004. Handbook of MRI Pulse Sequences, Elsevier.
Boudreau, M. et al., 2017. B1 mapping for bias-correction in quantitative T1 imaging of the brain at 3T using standard pulse sequences. J. Magn. Reson. Imaging, 46(6), pp.1673–1682.
Cercignani, M. et al., 2005. Three-dimensional quantitative magnetisation transfer imaging of the human brain. Neuroimage, 27(2), pp.436–441.
Chang, L.-C. et al., 2008. Linear least-squares method for unbiased estimation of T1 from SPGR signals. Magn. Reson. Med., 60(2), pp.496–501.
Christensen, K.A. et al., 1974. Optimal determination of relaxation times of fourier transform nuclear magnetic resonance. Determination of spin-lattice relaxation times in chemically polarized species. J. Phys. Chem., 78(19), pp.1971–1977.
Deoni, S.C.L., Rutt, B.K. & Peters, T.M., 2003. Rapid combined T1 and T2 mapping using gradient recalled acquisition in the steady state. Magn. Reson. Med., 49(3), pp.515–526.
Ernst, R.R. & Anderson, W.A., 1966. Application of Fourier Transform Spectroscopy to Magnetic Resonance. Rev. Sci. Instrum., 37(1), pp.93–102.
Fram, E.K. et al., 1987. Rapid calculation of T1 using variable flip angle gradient refocused imaging. Magn. Reson. Imaging, 5(3), pp.201–208.
Gupta, R.K., 1977. A new look at the method of variable nutation angle for the measurement of spin-lattice relaxation times using fourier transform NMR. J. Magn. Reson., 25(1), pp.231–235.
Homer, J. & Beevers, M.S., 1985. Driven-equilibrium single-pulse observation of T1 relaxation. A reevaluation of a rapid “new” method for determining NMR spin-lattice relaxation times. J. Magn. Reson., 63(2), pp.287–297.
Lee, Y., Callaghan, M.F. & Nagy, Z., 2017. Analysis of the Precision of Variable Flip Angle T1 Mapping with Emphasis on the Noise Propagated from RF Transmit Field Maps. Front. Neurosci., 11, p.106.
Liberman, G., Louzoun, Y. & Ben Bashat, D., 2014. T1 mapping using variable flip angle SPGR data with flip angle correction. J. Magn. Reson. Imaging, 40(1), pp.171–180.
Li, Z.F. et al., 2018. A simple B1 correction method for dynamic contrast-enhanced MRI. Phys. Med. Biol., 63(16), p.16NT01.
Lutti, A. & Weiskopf, N., 2013. Optimizing the accuracy of T1 mapping accounting for RF non-linearities and spoiling characteristics in FLASH imaging. In Proceedings of the 21st Annual Meeting of ISMRM, Salt Lake City, Utah, USA. p. 2478.
Samson, R.S. et al., 2006. A simple correction for B1 field errors in magnetization transfer ratio measurements. Magn. Reson. Imaging, 24(3), pp.255–263.
Schabel, M.C. & Morrell, G.R., 2009. Uncertainty in T1 mapping using the variable flip angle method with two flip angles. Phys. Med. Biol., 54(1), pp.N1–8.
Sled, J.G. & Pike, G.B., 1998. Standing-wave and RF penetration artifacts caused by elliptic geometry: an electrodynamic analysis of MRI. IEEE Trans. Med. Imaging, 17(4), pp.653–662.
Stikov, N. et al., 2015. On the accuracy of T1 mapping: Searching for common ground. Magn. Reson. Med., 73(2), pp.514–522.
Sung, K., Daniel, B.L. & Hargreaves, B.A., 2013. Transmit B1+ field inhomogeneity and T1 estimation errors in breast DCE-MRI at 3 tesla. J. Magn. Reson. Imaging, 38(2), pp.454–459.
Wang, J., Qiu, M. & Constable, R.T., 2005. In vivo method for correcting transmit/receive nonuniformities with phased array coils. Magn. Reson. Med., 53(3), pp.666–674.
Yarnykh, V.L., 2010. Optimal radiofrequency and gradient spoiling for improved accuracy of T1 and B1 measurements using fast steady-state techniques. Magn. Reson. Med., 63(6), pp.1610–1626.
Yarnykh, V.L., 2002. Pulsed Z-spectroscopic imaging of cross-relaxation parameters in tissues for human MRI: theory and clinical applications. Magn. Reson. Med., 47(5), pp.929–939.
Yuan, J. et al., 2012. Quantitative evaluation of dual-flip-angle T1 mapping on DCE-MRI kinetic parameter estimation in head and neck. Quant. Imaging Med. Surg., 2(4), pp.245–253.
Zur, Y., Wood, M.L. & Neuringer, L.J., 1991. Spoiling of transverse magnetization in steady-state sequences. Magn. Reson. Med., 21(2), pp.251–263.
###Code
# PYTHON CODE
display(HTML(
'<style type="text/css">'
'.output_subarea {'
'display: block;'
'margin-left: auto;'
'margin-right: auto;'
'}'
'.blog_body {'
'line-height: 2;'
'font-family: timesnewroman;'
'font-size: 18px;'
'margin-left: 0px;'
'margin-right: 0px;'
'}'
'.biblio_body {'
'line-height: 1.5;'
'font-family: timesnewroman;'
'font-size: 18px;'
'margin-left: 0px;'
'margin-right: 0px;'
'}'
'.note_body {'
'line-height: 1.25;'
'font-family: timesnewroman;'
'font-size: 18px;'
'margin-left: 0px;'
'margin-right: 0px;'
'color: #696969'
'}'
'.figure_caption {'
'line-height: 1.5;'
'font-family: timesnewroman;'
'font-size: 16px;'
'margin-left: 0px;'
'margin-right: 0px'
'</style>'
))
###Output
_____no_output_____ |
data/notebooks/data_cleaner.ipynb | ###Markdown
Data cleaner This notebook contains code for cleaning the data: mainly dropping unused columns and removing unnecessary content from the lyrics.
###Code
import pandas as pd
dataset_list = ["train.csv", "test.csv", "validation.csv"]
# Remove the This Lyrics is NOT for Commercial use portion of lyrics
# NOTE: Don't run this as it has already been done.
# text_to_remove = train["Lyrics"][0][-79:]
# def remove_text(data):
# """
# Remove unnecessary text above from the lyrics
# :param data: dataset
# :return: cleaned dataset
# """
# for i in range(len(data)):
# data["lyrics"].iloc[i] = data["lyrics"].iloc[i].replace(text_to_remove, "")
def data_cleaner(dataset):
"""
Cleans the data.
:param dataset: path to data
:return: cleaned data
"""
data = pd.read_csv(dataset)
# dropping NAN values
data = data.dropna()
# Drop columns no longer needed
# columns_to_drop = ["MSD_sng_id","MSD_track_id"]
columns_to_drop = ["dzr_sng_id", "valence", "arousal", "artist_name", "track_name"]
data.drop(columns_to_drop, axis=1, inplace=True)
# Remove the "This Lyrics is NOT for Commercial use" portion of lyrics
# remove_text(data)
return data
for data in dataset_list:
df = data_cleaner(data)
df.to_csv(data, index=False)
###Output
_____no_output_____ |
tf2/ex1_1/TF2_ex_1_1.ipynb | ###Markdown
Keras in Tensorflow 2.0
###Code
from tensorflow import keras
import numpy
x = numpy.array([0, 1, 2, 3, 4])
y = x * 2 + 1
model = keras.models.Sequential()
model.add(keras.layers.Dense(1,input_shape=(1,)))
model.compile('SGD', 'mse')
model.fit(x[:2], y[:2], epochs=1000, verbose=0)
print(model.predict(x))
###Output
[[1.000948]
[2.999414]
[4.99788 ]
[6.996346]
[8.994812]]
###Markdown
Tensorflow 2.0 with Keras IO Simple version
###Code
import tensorflow as tf2
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
model = tf2.keras.Sequential()
model.add(tf2.keras.layers.Dense(1, input_dim = 1))
model.build()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
###Output
[[1.0032865]
[2.9982057]
[4.9931245]
[6.988044 ]
[8.982963 ]]
###Markdown
Detailed version with monitoring of variables
###Code
import tensorflow as tf2
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
model = tf2.keras.Sequential()
model.add(tf2.keras.layers.Dense(1, input_dim = 1))
model.build()
print('w=', model.trainable_variables[0].numpy(), 'b=', model.trainable_variables[1].numpy())
print()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
if epoch < 3:
print(f'Epoch:{epoch}')
print('y_pr:', y_pr.numpy())
print('y_tr:', y[:2,:1])
print('loss:', loss.numpy())
print()
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
###Output
w= [[-0.9424849]] b= [0.]
Epoch:0
y_pr: [[ 0. ]
[-0.9424849]]
y_tr: [[1.]
[3.]]
loss: [ 1. 15.543187]
Epoch:1
y_pr: [[ 0.01 ]
[-0.92248493]]
y_tr: [[1.]
[3.]]
loss: [ 0.98010004 15.385887 ]
Epoch:2
y_pr: [[ 0.01999833]
[-0.902488 ]]
y_tr: [[1.]
[3.]]
loss: [ 0.9604033 15.229412 ]
[[1.0694708]
[2.9587076]
[4.8479443]
[6.737181 ]
[8.626418 ]]
###Markdown
Ex 1-1 by Keras in TensorFlow 2.0 Keras has now become the default high-level interface of TensorFlow. In other words, when writing AI code in TensorFlow, Keras can be used by default. There are broadly two ways to use Keras with TensorFlow. The first is to use Keras as the main interface, as in the original Keras style, with TensorFlow as the backend AI engine; we will call this the Keras-based usage on top of TensorFlow 2.0 (Keras in Tensorflow 2.0). The second is to use Keras while writing the AI code in the TensorFlow style; we will call this the TensorFlow 2.0 usage with the Keras interface (Tensorflow 2.0 with Keras IO). Chapter 9 of this book introduces a similar approach, but there the two were only mixed together; now that Keras is supported natively inside TensorFlow, the two are fused much more conveniently and powerfully. The first approach emphasizes convenience, while the second emphasizes power. Both approaches are introduced here. I. Keras-based usage on top of TensorFlow 2.0 (Keras in Tensorflow 2.0)
###Code
from tensorflow import keras
import numpy
x = numpy.array([0, 1, 2, 3, 4])
y = x * 2 + 1
model = keras.models.Sequential()
model.add(keras.layers.Dense(1,input_shape=(1,)))
model.compile('SGD', 'mse')
model.fit(x[:2], y[:2], epochs=1000, verbose=0)
print(model.predict(x))
###Output
[[1.000948]
[2.999414]
[4.99788 ]
[6.996346]
[8.994812]]
###Markdown
II. TensorFlow 2.0 usage with the Keras interface (Tensorflow 2.0 with Keras IO) Simple configuration
###Code
import tensorflow as tf2
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
model = tf2.keras.Sequential()
model.add(tf2.keras.layers.Dense(1, input_dim = 1))
model.build()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
###Output
[[1.0032865]
[2.9982057]
[4.9931245]
[6.988044 ]
[8.982963 ]]
###Markdown
Simple configuration with progress results displayed
###Code
import tensorflow as tf2
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
model = tf2.keras.Sequential()
model.add(tf2.keras.layers.Dense(1, input_dim = 1))
model.build()
print('w=', model.trainable_variables[0].numpy(), 'b=', model.trainable_variables[1].numpy())
print()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
if epoch < 3:
print(f'Epoch:{epoch}')
print('y_pr:', y_pr.numpy())
print('y_tr:', y[:2,:1])
print('loss:', loss.numpy())
print()
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
###Output
w= [[-1.0957096]] b= [0.]
Epoch:0
y_pr: [[ 0. ]
[-1.0957096]]
y_tr: [[1.]
[3.]]
loss: [ 1. 16.77484]
Epoch:1
y_pr: [[ 0.01 ]
[-1.0757096]]
y_tr: [[1.]
[3.]]
loss: [ 0.98010004 16.611406 ]
Epoch:2
y_pr: [[ 0.01999838]
[-1.0557125 ]]
y_tr: [[1.]
[3.]]
loss: [ 0.9604032 16.448805 ]
[[1.0864077]
[2.9482875]
[4.8101673]
[6.672047 ]
[8.533927 ]]
###Markdown
Building the network model with a class When creating a model with Keras, it can also be built using a class. This is the approach used in PyTorch, but it is available in Keras as well. In this case the definition of each layer of the neural network and the connections between the layers can be written separately, which aids understanding and is helpful when building complex networks. Here we use a class to build the Keras model; this should also help in understanding a PyTorch implementation and comparing it with the Keras one. The model is constructed with model = Model(), and there is no separate compile or build step, because when Keras is used in the TensorFlow style it builds the model automatically at the point where the model is first used; here, that is when y_pr = model(x[:2,:1]) is executed. Before that point the model has not been built, so its structure cannot be inspected with model.summary(); once the model has been used, the structure can be viewed. This differs from the pure Keras workflow, where a compile step is required; in that case the network structure becomes available after compiling.
###Code
import tensorflow as tf2
from tensorflow import keras
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
class Model(keras.models.Model):
def __init__(self):
super().__init__()
# self.layer = keras.layers.Dense(1, input_shape=[None,1])
self.layer = keras.layers.Dense(1, input_dim=1)
def call(self, x):
return self.layer(x)
model = Model()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
###Output
[[1.0013205]
[2.9992943]
[4.9972677]
[6.9952416]
[8.993216 ]]
|
neural_network/data.ipynb | ###Markdown
Correlation of metrics
Emotion - Moderate-high correlation with engagement, behaviors and attentiveness
HeadGaze - Moderate-high correlation; facing front fairly well justifies attentiveness, looking away is obvious disengagement
Motion - Low-moderate correlation; increasing micro-motions show correlation with attentiveness (research article)
HandPose - High correlation; raising hands definitely indicates engagement with the current session
SleepPose - High correlation; detected = not engaged
Correlation ranking
1. HandPose
2. SleepPose
3. Emotion
4. HeadGaze
5. Motion
Furthermore, we split the various metrics into their individual components/ranking:
1. HandPose (1 if raised, 0 if not)
2. SleepPose (1 if sleeping, 0 if not)
3. Emotion: a. Happy b. Angry c. Disgusted d. Fearful e. Neutral f. Sad g. Surprised
4. HeadGaze (2 if facing front, 1 if sideways, 0 if back)
5. Motion tbc
###Code
import random

import numpy as np
import pandas as pd

categories = ['Happy', 'Angry', 'Disgusted', 'Sad', 'Neutral', 'HandRaised', 'Sleep', 'HeadGaze', 'Engagement']
emo_list = ['Happy', 'Angry', 'Disgusted', 'Sad', 'Neutral']
df = pd.DataFrame(index = range(0,3000), columns = categories)
def inputEmotions(df, emo_list):
for row in range(3000):
#generate a random probability function (array) which adds up to one, returns a list of list after .tolist()
randArray = np.array(np.random.dirichlet(np.ones(5), size = 1)).tolist()[0]
for i in range(5):
df[emo_list[i]].iloc[row] = randArray[i]
#we split the categories here carefully because they shouldnt conflict
def _handRaised(x):
return 1
def _handMid(x):
return random.randint(0,1)
def _handNotRaised(x):
return 0
def _handRandom(x):
return random.randint(0,1)
def _sleep(x):
return 1
def _notSleep(x):
return 0
def _sleepRandom(x):
return random.randint(0,1)
def _headFront(x):
return 2
def _headModerate(x):
return random.randint(1,2)
def _headNotFront(x):
return random.randint(0,1)
def _headRandom(x):
return random.randint(0,2)
def _engagedHigh(x):
return random.randint(2,3)
def _engagedMid(x):
return random.randint(1,2)
def _engagedLow(x):
return random.randint(1,1)
def _engagedNoise(x):
return random.randint(1,3)
###Output
_____no_output_____
###Markdown
Synthetic Data
1. We work with 3000 training examples for this neural network
2. Assuming 10% noise in the data, we work with 2700 reliable examples and 300 noise examples (random labels)
3. We break the 2700 training examples into high, middle and low tier ratings, with a few simple rules in place to guide the model to learn:
1. high - high happy probability, moderate-low angry/disgusted, handraised 1, sleep 0, headgaze 2
2. mid - moderate emotion probabilities, handraised random, sleep 0, headgaze 1-2
3. low - low happy probability, moderate-high angry/disgusted, handraised 0, sleep 0/1, headgaze 0-1
4. noise - all are noise
5. engagement - we will take a softmax over the range 1 to 5, with 4-5 being most engaged, 2-3 mildly engaged, and 1 least engaged
###Code
inputEmotions(df , emo_list)
#sort df by happy vs not happy
df = df.sort_values(by = ['Happy'], ascending = False)
df.head()
###Output
_____no_output_____
###Markdown
We assign 900 rows to each tier (high, mid, low), along with 100 noise rows per tier, before feeding them into the NN
###Code
#High
df['HandRaised'][:900] = df['HandRaised'][:900].apply(lambda x: _handRaised(x))
df['HandRaised'][900:1000] = df['HandRaised'][900:1000].apply(lambda x: _handRandom(x))
df['Sleep'][:900] = df['Sleep'][:900].apply(lambda x: _notSleep(x))
df['Sleep'][900:1000] = df['Sleep'][900:1000].apply(lambda x: _sleepRandom(x))
df['HeadGaze'][:900] = df['HeadGaze'][:900].apply(lambda x: _headFront(x))
df['HeadGaze'][900:1000] = df['HeadGaze'][900:1000].apply(lambda x: _headRandom(x))
df['Engagement'][:900] = df['Engagement'][:900].apply(lambda x: _engagedHigh(x))
df['Engagement'][900:1000] = df['Engagement'][900:1000].apply(lambda x: _engagedNoise(x))
#Mid
df['HandRaised'][1000:1900] = df['HandRaised'][1000:1900].apply(lambda x: _handMid(x))
df['HandRaised'][1900:2000] = df['HandRaised'][1900:2000].apply(lambda x: _handRandom(x))
df['Sleep'][1000:1900] = df['Sleep'][1000:1900].apply(lambda x: _notSleep(x))
df['Sleep'][1900:2000] = df['Sleep'][1900:2000].apply(lambda x: _sleepRandom(x))
df['HeadGaze'][1000:1900] = df['HeadGaze'][1000:1900].apply(lambda x: _headModerate(x))
df['HeadGaze'][1900:2000] = df['HeadGaze'][1900:2000].apply(lambda x: _headRandom(x))
df['Engagement'][1000:1900] = df['Engagement'][1000:1900].apply(lambda x: _engagedMid(x))
df['Engagement'][1900:2000] = df['Engagement'][1900:2000].apply(lambda x: _engagedNoise(x))
#Low
df['HandRaised'][2000:2900] = df['HandRaised'][2000:2900].apply(lambda x: _handNotRaised(x))
df['HandRaised'][2900:3000] = df['HandRaised'][2900:3000].apply(lambda x: _handNotRaised(x))
df['Sleep'][2000:2900] = df['Sleep'][2000:2900].apply(lambda x: _sleepRandom(x))
df['Sleep'][2900:3000] = df['Sleep'][2900:3000].apply(lambda x: _sleepRandom(x))
df['HeadGaze'][2000:2900] = df['HeadGaze'][2000:2900].apply(lambda x: _headNotFront(x))
df['HeadGaze'][2900:3000] = df['HeadGaze'][2900:3000].apply(lambda x: _headRandom(x))
df['Engagement'][2000:2900] = df['Engagement'][2000:2900].apply(lambda x: _engagedLow(x))
df['Engagement'][2900:3000] = df['Engagement'][2900:3000].apply(lambda x: _engagedNoise(x))
df.shape
df = df[df['Happy'] <= 0.80]
df = df[df['Angry'] <= 0.80]
df = df[df['Disgusted'] <= 0.80]
df.to_csv('data.csv', index=False, header=False )
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
sns.distplot(df['Happy']);
sns.distplot(df['Angry']);
sns.distplot(df['Disgusted']);
def check(x):
if x[0] + x[1] + x[2] + x[3] + x[4] == 1.0:
return True
else:
return False
df['status'] = df.apply(check, axis=1)
df = df[df['status'] == True]
df = df[categories]
###Output
_____no_output_____ |
colabs/audience_analysis.ipynb | ###Markdown
1. Install Dependencies First install the libraries needed to execute recipes; this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project ID Running this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md); this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client Credentials Reading and writing to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md); this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis Parameters The Audience Wizard Dashboard helps you to track the audience performance across all audiences on Display. 1. Wait for BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted choose the new data source you just created. 1. Or give these instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_slug': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_name': '', # Name of report in DV360, should be unique.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience Analysis This does NOT need to be modified unless you are changing the recipe; just click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
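# For reference, a minimal conceptual sketch (an assumption, not the StarThinker
# implementation) of what json_set_fields amounts to: every {'field': {...}}
# placeholder inside TASKS is resolved to the matching FIELDS entry, falling back
# to the placeholder's 'default' when one is given (extras such as 'prefix' are
# ignored here for brevity).
def _sketch_set_fields(node, values):
    if isinstance(node, dict):
        if set(node.keys()) == {'field'}:
            spec = node['field']
            return values.get(spec['name'], spec.get('default'))
        return {key: _sketch_set_fields(value, values) for key, value in node.items()}
    if isinstance(node, list):
        return [_sketch_set_fields(value, values) for value in node]
    return node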
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you track audience performance across all audiences on Display. 1. Wait for BigQuery->->->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets. 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->->->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted, choose the new data source you just created. 1. Or give these instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_slug': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_name': '', # Name of report in DV360, should be unique.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
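# Hypothetical example values (illustrative assumptions only, not real DV360 ids
# or a real project):
# FIELDS = {
#     'recipe_slug': 'dv360_audience_analysis',
#     'recipe_timezone': 'America/Los_Angeles',
#     'recipe_name': 'DV360 Audience Analysis',
#     'partners': [1234567],
#     'advertisers': [2345678, 3456789],
#     'recipe_project': 'my-gcp-project',
# }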
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
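# Optional sanity check once the recipe has run (a sketch, assuming the
# google-cloud-bigquery client is installed and authorized in this runtime and
# that the view was created in CLOUD_PROJECT under the FIELDS['recipe_slug'] dataset):
# from google.cloud import bigquery
# bq = bigquery.Client(project=CLOUD_PROJECT)
# query = f"SELECT * FROM `{CLOUD_PROJECT}.{FIELDS['recipe_slug']}.DV360_Audience_Analysis` LIMIT 10"
# for row in bq.query(query).result():
#     print(dict(row))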
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you track audience performance across all audiences on Display. 1. Wait for BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets. 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted, choose the new data source you just created. 1. Or give these instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_name': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_30_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER',
'FILTER_USER_LIST'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_status',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'advertiser_integration_code',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_30_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'First Party Audience ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'First Party Audience ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_status',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'advertiser_integration_code',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_30_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Google Audience ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER',
'FILTER_USER_LIST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Google Audience ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_status',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'advertiser_integration_code',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_30_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Third Party Audience ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Third Party Audience ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_status',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'advertiser_integration_code',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you track audience performance across all audiences on Display. 1. Wait for BigQuery->->->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets. 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->->->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted, choose the new data source you just created. 1. Or give these instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_slug': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_name': '', # Name of report in DV360, should be unique.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you track audience performance across all audiences on Display. 1. Wait for BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets. 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted, choose the new data source you just created. 1. Or give these instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_name': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
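# Descriptive note (inferred from the field declarations above, not an official reference):
# json_set_fields replaces each {'field': {...}} placeholder in TASKS with the matching
# value from FIELDS, and json_expand_includes is presumably there to inline any 'include'
# entries before the project is initialized and executed.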
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you to track the audience performance across all audiences on Display. 1. Wait for BigQuery->->->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->->->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted choose the new data source you just created. 1. Or give these instructions to the client.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_slug': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_name': '', # Name of report in DV360, should be unique.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you to track the audience performance across all audiences on Display. 1. Wait for BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted choose the new data source you just created. 1. Or give these instructions to the client.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_slug': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_name': '', # Name of report in DV360, should be unique.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
DV360 Audience AnalysisThe Audience Wizard Dashboard helps you to track the audience performance across all audiences on Display. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions and limitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code was generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Set ConfigurationThis code is required to initialize the project. Fill in required fields and press play.1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).1. If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
###Code
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
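# Illustrative sketch only (hypothetical values, not defaults). A filled-in configuration
# for the user-auth flow described above might look like this, with client credentials
# pasted in and service credentials left empty:
# CONFIG = Configuration(
#   project="my-cloud-project",     # hypothetical Google Cloud project id
#   client={},                      # paste the downloaded client credentials JSON here
#   service={},                     # or paste service credentials JSON here instead
#   user="/content/user.json",      # where user credentials are read from / written to
#   verbose=True
# )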
###Output
_____no_output_____
###Markdown
3. Enter DV360 Audience Analysis Recipe Parameters 1. Wait for BigQuery->->->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->->->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted choose the new data source you just created. 1. Or give these instructions to the client.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_slug': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_name': '', # Name of report in DV360, should be unique.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
4. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners', 'kind': 'integer_list', 'order': 5, 'default': [], 'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers', 'kind': 'integer_list', 'order': 6, 'default': [], 'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone', 'kind': 'timezone', 'description': 'Timezone for report dates.', 'default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis Performance ', 'description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners', 'kind': 'integer_list', 'order': 5, 'default': [], 'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers', 'kind': 'integer_list', 'order': 6, 'default': [], 'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone', 'kind': 'timezone', 'description': 'Timezone for report dates.', 'default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis First Party', 'description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners', 'kind': 'integer_list', 'order': 5, 'default': [], 'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers', 'kind': 'integer_list', 'order': 6, 'default': [], 'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone', 'kind': 'timezone', 'description': 'Timezone for report dates.', 'default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis Google', 'description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners', 'kind': 'integer_list', 'order': 5, 'default': [], 'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers', 'kind': 'integer_list', 'order': 6, 'default': [], 'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone', 'kind': 'timezone', 'description': 'Timezone for report dates.', 'default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis Third Party', 'description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis Performance ', 'description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis First Party', 'description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis Google', 'description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name', 'kind': 'string', 'prefix': 'Audience Analysis Third Party', 'description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'header': True,
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `{dataset}.DV360_Audience_Performance` p LEFT JOIN `{dataset}.DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `{dataset}.DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `{dataset}.DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': {
'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'description': 'Place where tables will be created in BigQuery.'}}
},
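        # Descriptive note: in this newer task format the named '{dataset}' token in the
        # query above is resolved from the 'parameters' mapping, rather than positionally
        # from a '[PARAMETER]' list as in the older variants earlier in this document.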
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug', 'kind': 'string', 'description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
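# json_set_fields fills the {'field': {...}} placeholders from FIELDS as before. force=True
# is assumed to make execute() run every task immediately rather than waiting for its
# scheduled 'hour' window (assumption about the StarThinker helper, not verified here).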
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you to track the audience performance across all audiences on Display. 1. Wait for BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted choose the new data source you just created. 1. Or give these instructions to the client.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_name': '', # Place where tables will be created in BigQuery.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
'recipe_project': '', # Google Cloud Project Id.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'hour': [
1
],
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Audience_Performance',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'partner_currency',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'clicks',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'total_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_click_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'post_view_conversions',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'total_media_cost_partner_currency',
'type': 'FLOAT'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_First_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Google_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'dbm': {
'hour': [
2
],
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
}
}
}
}
},
{
'dbm': {
'hour': [
6
],
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.'}}
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'table': 'DV360_Third_Party_Audience',
'schema': [
{
'mode': 'REQUIRED',
'name': 'advertiser',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'advertiser_id',
'type': 'INT64'
},
{
'mode': 'REQUIRED',
'name': 'audience_list',
'type': 'STRING'
},
{
'mode': 'REQUIRED',
'name': 'audience_list_id',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_type',
'type': 'STRING'
},
{
'mode': 'NULLABLE',
'name': 'audience_list_cost_usd',
'type': 'FLOAT'
},
{
'mode': 'NULLABLE',
'name': 'potential_impressions',
'type': 'INT64'
},
{
'mode': 'NULLABLE',
'name': 'unique_cookies_with_impressions',
'type': 'INT64'
}
]
}
}
}
},
{
'bigquery': {
'hour': [
6
],
'auth': 'user',
'from': {
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'name': 'recipe_project','kind': 'string','order': 6,'default': '','description': 'Google Cloud Project Id.'}},
{'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
],
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_name','kind': 'string','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Audience Analysis ParametersThe Audience Wizard Dashboard helps you to track the audience performance across all audiences on Display. 1. Wait for BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis to be created. 1. Join the StarThinker Assets Group to access the following assets 1. Copy Sample DV360 Audience Analysis Dataset. 1. Click Edit Connection, and change to BigQuery->UNDEFINED->UNDEFINED->DV360_Audience_Analysis. 1. Copy Sample DV360 Audience Analysis Report. 1. When prompted choose the new data source you just created. 1. Or give these instructions to the client. Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_name': '', # Name of report in DV360, should be unique.
'recipe_slug': '', # Place where tables will be created in BigQuery.
'partners': [], # DV360 partner id.
'recipe_project': '', # Google Cloud Project Id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
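###Markdown
Before executing, it can help to see what the `{'field': {...}}` placeholders in TASKS turn into. The helper below is only an illustrative approximation of that substitution step; `resolve_fields_sketch` is a hypothetical name written for this demonstration, and the real `json_set_fields` in StarThinker may behave differently (judging from how it is called below, it appears to modify TASKS in place rather than return a copy).
###Code
# Illustrative sketch only -- approximates how {'field': {...}} placeholders are resolved from FIELDS.
# resolve_fields_sketch is a hypothetical helper; the real starthinker json_set_fields may differ.
def resolve_fields_sketch(node, fields):
    if isinstance(node, dict):
        if set(node.keys()) == {'field'}:
            spec = node['field']
            value = fields.get(spec['name'], spec.get('default'))
            prefix = spec.get('prefix', '')
            if prefix and isinstance(value, str):
                return prefix + value
            return value
        return {key: resolve_fields_sketch(val, fields) for key, val in node.items()}
    if isinstance(node, list):
        return [resolve_fields_sketch(val, fields) for val in node]
    return node

example = {'title': {'field': {'name': 'recipe_name','kind': 'string','prefix': 'Audience Analysis ','description': 'demo'}}}
print(resolve_fields_sketch(example, {'recipe_name': 'My Report'}))
###Output
_____no_output_____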
###Markdown
5. Execute DV360 Audience AnalysisThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'auth': 'user',
'description': 'Create a dataset for bigquery tables.',
'hour': [
1
],
'dataset': {'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'dbm': {
'auth': 'user',
'report': {
'body': {
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'params': {
'type': 'TYPE_GENERAL',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST',
'FILTER_PARTNER_CURRENCY'
],
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS',
'METRIC_LAST_CLICKS',
'METRIC_LAST_IMPRESSIONS',
'METRIC_TOTAL_MEDIA_COST_PARTNER'
]
},
'timezoneCode': {'field': {'description': 'Timezone for report dates.','kind': 'timezone','name': 'recipe_timezone','default': 'America/Los_Angeles'}}
},
'filters': {
'FILTER_ADVERTISER': {
'values': {'field': {'description': 'Comma delimited list of DV360 advertiser ids.','name': 'advertisers','order': 6,'default': [],'kind': 'integer_list'}}
},
'FILTER_PARTNER': {
'values': {'field': {'description': 'DV360 partner id.','name': 'partners','order': 5,'default': [],'kind': 'integer_list'}}
}
}
},
'hour': [
2
]
}
},
{
'dbm': {
'auth': 'user',
'report': {
'name': {'field': {'prefix': 'Audience Analysis Performance ','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'out': {
'bigquery': {
'table': 'DV360_Audience_Performance',
'dataset': {'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
'schema': [
{
'type': 'STRING',
'name': 'advertiser',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'advertiser_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'audience_list_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list_type',
'mode': 'NULLABLE'
},
{
'type': 'STRING',
'name': 'audience_list_cost_usd',
'mode': 'NULLABLE'
},
{
'type': 'STRING',
'name': 'partner_currency',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'impressions',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'clicks',
'mode': 'NULLABLE'
},
{
'type': 'FLOAT',
'name': 'total_conversions',
'mode': 'NULLABLE'
},
{
'type': 'FLOAT',
'name': 'post_click_conversions',
'mode': 'NULLABLE'
},
{
'type': 'FLOAT',
'name': 'post_view_conversions',
'mode': 'NULLABLE'
},
{
'type': 'FLOAT',
'name': 'total_media_cost_partner_currency',
'mode': 'NULLABLE'
}
]
}
},
'hour': [
6
]
}
},
{
'dbm': {
'auth': 'user',
'report': {
'body': {
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_FIRST_PARTY_NAME',
'FILTER_USER_LIST_FIRST_PARTY',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_FIRST_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
},
'timezoneCode': {'field': {'description': 'Timezone for report dates.','kind': 'timezone','name': 'recipe_timezone','default': 'America/Los_Angeles'}}
},
'filters': {
'FILTER_ADVERTISER': {
'values': {'field': {'description': 'Comma delimited list of DV360 advertiser ids.','name': 'advertisers','order': 6,'default': [],'kind': 'integer_list'}}
},
'FILTER_PARTNER': {
'values': {'field': {'description': 'DV360 partner id.','name': 'partners','order': 5,'default': [],'kind': 'integer_list'}}
}
}
},
'hour': [
2
]
}
},
{
'dbm': {
'auth': 'user',
'report': {
'name': {'field': {'prefix': 'Audience Analysis First Party','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'out': {
'bigquery': {
'table': 'DV360_First_Party_Audience',
'dataset': {'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
'schema': [
{
'type': 'STRING',
'name': 'advertiser',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'advertiser_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'audience_list_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list_type',
'mode': 'NULLABLE'
},
{
'type': 'FLOAT',
'name': 'audience_list_cost_usd',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'potential_impressions',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'unique_cookies_with_impressions',
'mode': 'NULLABLE'
}
]
}
},
'hour': [
6
]
}
},
{
'dbm': {
'auth': 'user',
'report': {
'body': {
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_AUDIENCE_LIST',
'FILTER_USER_LIST',
'FILTER_AUDIENCE_LIST_TYPE',
'FILTER_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
},
'timezoneCode': {'field': {'description': 'Timezone for report dates.','kind': 'timezone','name': 'recipe_timezone','default': 'America/Los_Angeles'}}
},
'filters': {
'FILTER_ADVERTISER': {
'values': {'field': {'description': 'Comma delimited list of DV360 advertiser ids.','name': 'advertisers','order': 6,'default': [],'kind': 'integer_list'}}
},
'FILTER_PARTNER': {
'values': {'field': {'description': 'DV360 partner id.','name': 'partners','order': 5,'default': [],'kind': 'integer_list'}}
}
}
},
'hour': [
2
]
}
},
{
'dbm': {
'auth': 'user',
'report': {
'name': {'field': {'prefix': 'Audience Analysis Google','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'out': {
'bigquery': {
'table': 'DV360_Google_Audience',
'dataset': {'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
'schema': [
{
'type': 'STRING',
'name': 'advertiser',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'advertiser_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'audience_list_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list_type',
'mode': 'NULLABLE'
},
{
'type': 'FLOAT',
'name': 'audience_list_cost_usd',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'potential_impressions',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'unique_cookies_with_impressions',
'mode': 'NULLABLE'
}
]
}
},
'hour': [
6
]
}
},
{
'dbm': {
'auth': 'user',
'report': {
'body': {
'metadata': {
'dataRange': 'LAST_7_DAYS',
'format': 'CSV',
'title': {'field': {'prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'params': {
'type': 'TYPE_INVENTORY_AVAILABILITY',
'groupBys': [
'FILTER_ADVERTISER_NAME',
'FILTER_ADVERTISER',
'FILTER_USER_LIST_THIRD_PARTY_NAME',
'FILTER_USER_LIST_THIRD_PARTY',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_TYPE',
'FILTER_THIRD_PARTY_AUDIENCE_LIST_COST'
],
'metrics': [
'METRIC_BID_REQUESTS',
'METRIC_UNIQUE_VISITORS_COOKIES'
]
},
'timezoneCode': {'field': {'description': 'Timezone for report dates.','kind': 'timezone','name': 'recipe_timezone','default': 'America/Los_Angeles'}}
},
'filters': {
'FILTER_ADVERTISER': {
'values': {'field': {'description': 'Comma delimited list of DV360 advertiser ids.','name': 'advertisers','order': 6,'default': [],'kind': 'integer_list'}}
},
'FILTER_PARTNER': {
'values': {'field': {'description': 'DV360 partner id.','name': 'partners','order': 5,'default': [],'kind': 'integer_list'}}
}
}
},
'hour': [
2
]
}
},
{
'dbm': {
'auth': 'user',
'report': {
'name': {'field': {'prefix': 'Audience Analysis Third Party','description': 'Name of report in DV360, should be unique.','name': 'recipe_name','kind': 'string'}}
},
'out': {
'bigquery': {
'table': 'DV360_Third_Party_Audience',
'dataset': {'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
'schema': [
{
'type': 'STRING',
'name': 'advertiser',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'advertiser_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list',
'mode': 'REQUIRED'
},
{
'type': 'INT64',
'name': 'audience_list_id',
'mode': 'REQUIRED'
},
{
'type': 'STRING',
'name': 'audience_list_type',
'mode': 'NULLABLE'
},
{
'type': 'FLOAT',
'name': 'audience_list_cost_usd',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'potential_impressions',
'mode': 'NULLABLE'
},
{
'type': 'INT64',
'name': 'unique_cookies_with_impressions',
'mode': 'NULLABLE'
}
]
}
},
'hour': [
6
]
}
},
{
'bigquery': {
'auth': 'user',
'to': {
'dataset': {'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
'view': 'DV360_Audience_Analysis'
},
'hour': [
6
],
'from': {
'legacy': False,
'query': " SELECT p.advertiser_id, p.advertiser, p.audience_list_id, IF (p.audience_list_type = 'Bid Manager Audiences', 'Google', p.audience_list_type) AS audience_list_type, CASE WHEN REGEXP_CONTAINS(p.audience_list, 'Affinity') THEN 'Affinity' WHEN REGEXP_CONTAINS(p.audience_list, 'Demographics') THEN 'Demographics' WHEN REGEXP_CONTAINS(p.audience_list, 'In-Market') THEN 'In-Market' WHEN REGEXP_CONTAINS(p.audience_list, 'Similar') THEN 'Similar' ELSE 'Custom' END AS audience_list_category, p.audience_list, IF(p.audience_list_cost_usd = 'Unknown', 0.0, CAST(p.audience_list_cost_usd AS FLOAT64)) AS audience_list_cost, p.total_media_cost_partner_currency AS total_media_cost, p.impressions, p.clicks, p.total_conversions, COALESCE(ggl.potential_impressions, fst.potential_impressions, trd.potential_impressions) AS potential_impressions, COALESCE(ggl.unique_cookies_with_impressions, fst.unique_cookies_with_impressions, trd.unique_cookies_with_impressions) AS potential_reach FROM `[PARAMETER].[PARAMETER].DV360_Audience_Performance` p LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Google_Audience` ggl USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_First_Party_Audience` fst USING (advertiser_id, audience_list_id) LEFT JOIN `[PARAMETER].[PARAMETER].DV360_Third_Party_Audience` trd USING (advertiser_id, audience_list_id) ",
'parameters': [
{'field': {'description': 'Google Cloud Project Id.','name': 'recipe_project','order': 6,'default': '','kind': 'string'}},
{'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'description': 'Google Cloud Project Id.','name': 'recipe_project','order': 6,'default': '','kind': 'string'}},
{'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'description': 'Google Cloud Project Id.','name': 'recipe_project','order': 6,'default': '','kind': 'string'}},
{'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}},
{'field': {'description': 'Google Cloud Project Id.','name': 'recipe_project','order': 6,'default': '','kind': 'string'}},
{'field': {'kind': 'string','name': 'recipe_slug','description': 'Place where tables will be created in BigQuery.'}}
]
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____ |
ml/02-pandas/02-read-excel.ipynb | ###Markdown
Some examples of Excel spreadsheet operations. The data used here is test data and is not provided externally. Split one complex Excel file into N smaller ones
###Code
import pandas as pd
import numpy as np
fileName = "/Users/mac/Documents/1001.xlsx"
sheet = pd.read_excel(io=fileName)
sheet
columnsIndex = []
for i in range(326):
columnsIndex.append(450302000566+i)
sheet.columns = columnsIndex
sheet[[450302000566,450302000567,450302000568,450302000569,450302000570,450302000571,450302000572,450302000573,450302000574,450302000575,450302000576,450302000577]]
outputName = "/Users/mac/Documents/001.xls"
sheet.columns = columnsIndex
sheet[[450302000566,450302000567,450302000568,450302000569,450302000570,450302000571,450302000572,450302000573,450302000574,450302000575,450302000576,450302000577]].to_excel(outputName,sheet_name="001",index=False)
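# NOTE: the loop below only builds 10-column slices and discards them (nothing is written out); see the splitting sketch after this cell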
for i in range(325):
sheet.iloc[:,1+i:11+i]
###Output
_____no_output_____
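###Markdown
The last loop above only previews column slices without writing anything. A minimal sketch of actually splitting the sheet into N smaller Excel files is shown below; the chunk size of 10 columns and the output folder are assumptions made for illustration only.
###Code
# Minimal sketch of the splitting described above (assumed chunk size and output folder).
import os

chunk_size = 10
out_dir = "/Users/mac/Documents/split_output"  # assumed output folder
os.makedirs(out_dir, exist_ok=True)

for start in range(0, sheet.shape[1], chunk_size):
    chunk = sheet.iloc[:, start:start + chunk_size]
    out_path = os.path.join(out_dir, "part_{:03d}.xlsx".format(start // chunk_size + 1))
    chunk.to_excel(out_path, sheet_name="001", index=False)
###Output
_____no_output_____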
###Markdown
Excel spreadsheet merging: combine all the Excel files under a given folder into one
###Code
import pandas as pd
import os
filePath = "/Users/mac/微云同步盘/work/celloud/shenhengling/02 日常统计表格/01 百菌探-收样记录/01 收样记录表/"
df_empty=pd.DataFrame(columns=['序号','接收时间','样本类型','样本个数','样本姓名','发样地址','快递单号','样本接收人'])
# df_empty=pd.DataFrame()
for filename in os.listdir(filePath):
df=pd.read_excel(io = os.path.join(filePath,filename),encoding_override='utf8',header=0,usecols=[0,1,2,3,4,5,6,7])
df_empty=df_empty.append(df,ignore_index=True)
df_empty[['快递单号']] = df_empty[['快递单号']].astype(str)
df_empty
outFile = "/Users/mac/Documents/sample-info.csv"
df_empty.to_csv(outFile,encoding="utf_8_sig")
###Output
_____no_output_____
###Markdown
Merge multiple Excel files
###Code
import pandas as pd
import os
# Directory that contains all the Excel files to merge
workDir = "/Users/mac/Documents/workspaces/gitee/biogeek/spider/decipherJs/"
# List to collect each file's DataFrame
frames = []
for i in range(1,11):
    # Build the absolute path of the current file
filePath = os.path.join(workDir,"patient_trio_"+str(i)+".xlsx")
    # Read the Excel file with pandas
df = pd.read_excel(filePath)
frames.append(df)
# Concatenate everything and write it out to a single Excel file
result = pd.concat(frames)
result.to_excel(workDir+"patient_trio.xlsx",index=False)
###Output
_____no_output_____ |
notebooks/Sofia/join_data_and_fit_fromDataSheet.ipynb | ###Markdown
Fitting Data
###Code
def make_hist(data, parameter, bins, constant) :
fig, ax = plt.subplots(figsize=(15,15))
sns.set_style("ticks")
hist = data.hist(by=[Type,tubulin], column=parameter, bins = bins,density=True,ax=ax)
fig.suptitle(parameter)
sns.despine()
sns.set_context("poster", font_scale=1, rc={"lines.linewidth":3.0})
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
unique = data[DCXconc].unique()
cte = [x for x in unique if x > 0]
return hist, cte[0],fig
def get_hist(ax):
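    # recover bar heights (n) and left bin edges from the Rectangle patches of an already-plotted histogram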
n,bins = [],[]
for rect in ax.patches:
((x0, y0), (x1, y1)) = rect.get_bbox().get_points()
n.append(y1-y0)
bins.append(x0) # left edge of each bin
#bins.append(x1) # also get right edge of last bin
return n,bins
def gaussian(x, mu, sig):
return (np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.))) )/(sig*np.sqrt(2*np.pi))
def exponential(x, scale):
return ((np.exp(-x/scale) )/(scale))
def gamma(x, shape, scale):
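    # gamma pdf in the shape/scale parameterization; its mean is shape*scale (used later for the lifetime summary)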
return (np.power(x,shape-1)*np.exp(-x/ scale))/(sp.special.gamma(shape) * np.power(scale,shape))
def equation_fit(data, parameter, equation, constant,maxbin,binsize):
bins = np.arange(0, maxbin + binsize, binsize)
hist, cte,fig = make_hist(data, parameter, bins, constant)
results = pd.DataFrame(columns=[] , index=[])
for i in np.arange(0,len(hist[0][:])):
for j in np.arange(0,len(hist[:][0])):
n, bins = get_hist(hist[j][i]);
if n == []:
break
title = []
title = hist[j][i].get_title()
title = title[1:-1]
title = title.split(',')
if equation == gamma :
coeff, var_matrix = sp.optimize.curve_fit(equation,bins,n,[2,1])
else :
coeff, var_matrix = sp.optimize.curve_fit(equation,bins,n)
variance = np.diagonal(var_matrix) #Refer [3]
SE = np.sqrt(variance) #Refer [4]
#======Making a data frame========
results0 = pd.DataFrame(columns=[] , index=[])
for k in np.arange(0,len(coeff)):
header = [np.array([parameter]),np.array(['Coefficient '+ str(k)])]
r0 = pd.DataFrame([coeff[k],SE[k]], index=(['Value','SE']),columns= header)
results0 = pd.concat([results0, r0], axis=1, sort=False)
results0[tubulin] = float(title[1])
if title[0] == 'None':
results0[constant] = 0
else:
results0[constant] = cte
results0[Type] = title[0]
results = pd.concat([results, results0], sort=False)
return results,fig
newmydir = path/('fitdata')
newmydir.mkdir(exist_ok=True)
GrowthRateFit , GrowthRateFig = equation_fit(data,GrowthRate,gaussian,DCXconc,1.5,0.05);
GrowthRateFig.savefig(newmydir/('GrowthRateHist_'+jointdate+'.pdf'))
TimeToNucleateFit , TimeToNucleateFig = equation_fit(data,TimeToNucleate,exponential,DCXconc,45,1);
TimeToNucleateFig.savefig(newmydir/('TimeToNucleateHist_'+jointdate+'.pdf'))#
LifetimeFit , LifetimeFig = equation_fit(data,Lifetime,gamma,DCXconc,45,1);
LifetimeFig.savefig(newmydir/('LifetimeHist_'+jointdate+'.pdf'))
ShrinkageRateFit , ShrinkageRateFig = equation_fit(data,ShrinkageRate,gaussian,DCXconc,30,0.5);
ShrinkageRateFig.savefig(newmydir/('ShrinkageRateHist_'+jointdate+'.pdf'))
ResultFit = pd.concat([GrowthRateFit, TimeToNucleateFit,LifetimeFit,ShrinkageRateFit], axis=1, sort=False)
ResultFit = ResultFit.loc[:,~ResultFit.columns.duplicated()]
ResultFit.to_csv(newmydir/('ResultFit_'+jointdate+'.csv'), encoding='utf-8', index=False)
ResultFit
###Output
_____no_output_____
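###Markdown
As a quick sanity check of the fitting approach (a self-contained sketch on synthetic data, not part of the original analysis), the same curve_fit call can be applied to a histogram of random gamma samples whose shape and scale are known in advance:
###Code
# Self-contained sanity check: fit the gamma pdf defined above to synthetic data with known parameters.
import numpy as np
import scipy as sp
import scipy.optimize
import scipy.special

np.random.seed(0)
true_shape, true_scale = 2.0, 3.0                      # assumed 'ground truth' for this demo only
samples = np.random.gamma(true_shape, true_scale, 5000)

counts, edges = np.histogram(samples, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# reuse the gamma pdf defined earlier in this notebook
coeff, _ = sp.optimize.curve_fit(gamma, centers, counts, p0=[2, 1])
print('fitted shape, scale:', coeff, '(true:', true_shape, true_scale, ')')
###Output
_____no_output_____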
###Markdown
PLOT DATA
###Code
LifetimeCoeff0 = ResultFit[Lifetime]['Coefficient 0'].loc['Value']
LifetimeCoeff1 = ResultFit[Lifetime]['Coefficient 1'].loc['Value']
LifetimeSE0 = ResultFit[Lifetime]['Coefficient 0'].loc['SE']
LifetimeSE1 = ResultFit[Lifetime]['Coefficient 1'].loc['SE']
LifetimeMean = LifetimeCoeff0*LifetimeCoeff1
LifetimeSE = LifetimeCoeff0*LifetimeSE1 + LifetimeCoeff1*LifetimeSE0  # error propagation for the product shape*scale (uses both standard errors)
parameters = [GrowthRate,TimeToNucleate,Lifetime,ShrinkageRate]
titles = ('Growth','Nucleation','Lifetime','Correlation')
ylables = ('Growth Rate ' r'$(\mu m / min)$','Time to Nucleate ' r'$(min)$','Lifetime ' r'$(min)$','Time to Nucleate ' r'$ (min)$')
ylim = 26
scattersize = 12
fig, ax = plt.subplots(2,2,figsize=(15,15))
#plt.suptitle('Fitted pooled data', fontsize=30)
ax[0][0].errorbar(tubfitdata[variable]['Value'], tubfitdata['mu']['Value'].values, yerr=tubfitdata['mu']['SE'].values, fmt='o', markersize=scattersize,capsize = 3,color=blue)
ax[0][0].set_ylim(0,1.5)
ax[0][0].tick_params(axis='x', labelcolor= blue)
ax1 = ax[0][0].twiny()
ax1.errorbar(fitdata[variable2]['Value'], fitdata['mu']['Value'].values, yerr=fitdata['mu']['SE'].values, fmt='o', markersize=scattersize,capsize = 3,color=green)
ax1.set_xlim(-3,103)
ax1.set_xlabel(variable2 + r'$(nM)$')
ax1.set_xticks(np.arange(0, 101, 25))
ax1.tick_params(axis='x', labelcolor= green, width = 3.5, length = 7)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_lw(3)
ax[0][1].errorbar(tubfitdata[variable]['Value'], tubfitdata['scale e']['Value'].values, yerr=tubfitdata['scale e']['SE'].values, fmt='o', markersize=scattersize,capsize = 3,color=blue)
ax[0][1].set_ylim(0,ylim)
ax[0][1].tick_params(axis='x', labelcolor= blue)
ax1 = ax[0][1].twiny()
ax1.errorbar(fitdata[variable2]['Value'], fitdata['scale e']['Value'].values, yerr=fitdata['scale e']['SE'].values, fmt='o', markersize=scattersize,capsize = 3,color=green)
ax1.set_xlim(-3,103)
ax1.set_xlabel(variable2 + r'$(nM)$')
ax1.set_xticks(np.arange(0, 101, 25))
ax1.tick_params(axis='x', labelcolor= green, width = 3.5, length = 7)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_lw(3)
ax[1][0].errorbar(tubfitdata[variable]['Value'], tubgamma_mu, yerr=tubgamma_muSE, fmt='o', markersize=scattersize,capsize = 3,color=blue)
ax[1][0].set_ylim(0,18)
ax[1][0].tick_params(axis='x', labelcolor= blue)
ax1 = ax[1][0].twiny()
ax1.errorbar(fitdata[variable2]['Value'], gamma_mu, yerr=gamma_muSE, fmt='o', markersize=scattersize,capsize = 3,color=green)
ax1.set_xlim(-3,103)
ax1.set_xlabel(variable2 + r'$(nM)$')
ax1.tick_params(axis='x', labelcolor= green, width = 3.5, length = 7)
ax1.set_xticks(np.arange(0, 101, 25))
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_lw(3)
ax[1][1].errorbar( tubgamma_mu,tubfitdata['scale e']['Value'].values, xerr=tubgamma_muSE , yerr=tubfitdata['scale e']['SE'].values, fmt='o', markersize=scattersize,capsize = 3,color = blue)
ax[1][1].errorbar( gamma_mu,fitdata['scale e']['Value'].values, xerr=gamma_muSE , yerr=fitdata['scale e']['SE'].values, fmt='o', markersize=scattersize,capsize = 3,color = green)
ax[1][1].set_ylim(0,ylim)
ax[1][1].legend((variable,variable2),loc='upper right',title='Variable')
count = 0
for i in np.arange(len(ax)):
for j in np.arange(len(ax)):
ax[i][j].set_xlabel(variable + r'$(\mu M)$')
ax[i][j].set_ylabel(ylables[count])
ax[i][j].set_xlim(0,17)
ax[i][j].set_xticks(np.arange(0, 17, 2))
ax[i][j].spines['right'].set_visible(False)
ax[i][j].spines['top'].set_visible(False)
ax[i][j].spines['left'].set_lw(3)
ax[i][j].spines['bottom'].set_lw(3)
ax[i][j].tick_params(axis='both', width = 3.5, length = 7)
count += 1
ax[1][1].set_xlabel('Lifetime ' r'$(min)$')
ax[1][1].set_xlim(0,18)
ax[1][1].set_title(titles[3]);
plt.tight_layout()
#plt.savefig(path.parents[0]/('joint_graphsFit_'+jointdate+'.png'))
plt.savefig(path.parents[0]/('joint_graphsFit_corr_'+jointdate+'.pdf'))
###Output
_____no_output_____ |
Data_Engineering/ETL Pipelines/12_dummyvariables_exercise/12_dummyvariables_exercise.ipynb | ###Markdown
Dummy Variables ExerciseIn this exercise, you'll create dummy variables from the projects data set. The idea is to transform categorical data like this:| Project ID | Project Category ||------------|------------------|| 0 | Energy || 1 | Transportation || 2 | Health || 3 | Employment |into new features that look like this:| Project ID | Energy | Transportation | Health | Employment ||------------|--------|----------------|--------|------------|| 0 | 1 | 0 | 0 | 0 || 1 | 0 | 1 | 0 | 0 || 2 | 0 | 0 | 1 | 0 || 3 | 0 | 0 | 0 | 1 |(Note if you were going to use this data with a model influenced by multicollinearity, you would want to eliminate one of the columns to avoid redundant information.) The reasoning behind these transformations is that machine learning algorithms read in numbers not text. Text needs to be converted into numbers. You could assign a number to each category like 1, 2, 3, and 4. But a categorical variable has no inherent order.Pandas makes it very easy to create dummy variables with the [get_dummies](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html) method. In this exercise, you'll create dummy variables from the World Bank projects data; however, there's a caveat. The World Bank data is not particularly clean, so you'll need to explore and wrangle the data first.You'll focus on the text values in the sector variables.Run the code cells below to read in the World Bank projects data set and then to filter out the data for text variables.
###Code
import pandas as pd
import numpy as np
# read in the projects data set and do basic wrangling
projects = pd.read_csv('../data/projects_data.csv', dtype=str)
projects.drop('Unnamed: 56', axis=1, inplace=True)
projects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))
projects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]
projects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])
# keep the project name, lending, sector and theme data
sector = projects.copy()
sector = sector[['project_name', 'lendinginstr', 'sector1', 'sector2', 'sector3', 'sector4', 'sector5', 'sector',
'mjsector1', 'mjsector2', 'mjsector3', 'mjsector4', 'mjsector5',
'mjsector', 'theme1', 'theme2', 'theme3', 'theme4', 'theme5', 'theme ',
'goal', 'financier', 'mjtheme1name', 'mjtheme2name', 'mjtheme3name',
'mjtheme4name', 'mjtheme5name']]
###Output
_____no_output_____
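###Markdown
As a quick illustration of the transformation described above (a toy sketch, not part of the exercise data), `pd.get_dummies` produces exactly the 0/1 columns shown in the tables, and `drop_first=True` drops one of them to avoid the multicollinearity mentioned earlier.
###Code
# Toy illustration of the dummy-variable transformation described above.
toy = pd.DataFrame({'project_category': ['Energy', 'Transportation', 'Health', 'Employment']})
print(pd.get_dummies(toy['project_category']))
# drop_first=True removes one redundant column, which helps models sensitive to multicollinearity
print(pd.get_dummies(toy['project_category'], drop_first=True))
###Output
_____no_output_____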
###Markdown
Run the code cell below. This cell shows the percentage of each variable that is null. Notice the mjsector1 through mjsector5 variables are all null. The mjtheme1name through mjtheme5name are also all null as well as the theme variable. Because these variables contain so many null values, they're probably not very useful.
###Code
# output percentage of values that are missing
100 * sector.isnull().sum() / sector.shape[0]
###Output
_____no_output_____
###Markdown
The sector1 variable looks promising; it doesn't contain any null values at all. In the next cell, store the unique sector1 values in a list and output the results. Use the sort_values() and unique() methods.
###Code
# TODO: Create a list of the unique values in sector1. Use the sort_values() and unique() pandas methods.
# And then convert those results into a Python list
uniquesectors1 = list(sector['sector1'].sort_values().unique())
uniquesectors1
# run this code cell to see the number of unique values
print('Number of unique values in sector1:', len(uniquesectors1))
###Output
Number of unique values in sector1: 3060
###Markdown
3060 different categories is quite a lot! Remember that with dummy variables, if you have n categorical values, you need n - 1 new variables! That means 3059 extra columns! There are a few issues with this 'sector1' variable. First, there are values labeled '!$!0'. These should be substituted with NaN.Furthermore, each sector1 value ends with a ten or eleven character string like '!$!49!$!EP'. Some sectors show up twice in the list like: 'Other Industry; Trade and Services!$!70!$!YZ', 'Other Industry; Trade and Services!$!63!$!YZ',But it seems like those are actually the same sector. You'll need to remove everything past the exclamation point. Many values in the sector1 variable start with the term '(Historic)'. Try removing that phrase as well.Fix these issues in the code cell below.
###Code
# TODO: In the sector1 variable, replace the string '!$!0' with nan
# Put the results back into the sector1 variable
# HINT: you can use the pandas replace() method and numpy.nan
sector['sector1'] = sector['sector1'].replace('!$!0', np.nan)
# TODO: In the sector1 variable, remove the last 10 or 11 characters from the sector1 variable.
# HINT: There is more than one way to do this. For example,
# you can use the replace method with a regex expression '!.+'
# That regex expression looks for a string with an exclamation
# point followed by one or more characters
sector['sector1'] = sector['sector1'].str.split('!', 1).str[0]
# TODO: Remove the string '(Historic)' from the sector1 variable
# HINT: You can use the replace method
sector['sector1'] = sector['sector1'].replace(r'^(\(Historic\))', "", regex=True)  # escape the parentheses so the literal '(Historic)' prefix is removed
print('Number of unique sectors after cleaning:', len(list(sector['sector1'].unique())))
print('Percentage of null values after cleaning:', 100 * sector['sector1'].isnull().sum() / sector['sector1'].shape[0])
###Output
Number of unique sectors after cleaning: 156
Percentage of null values after cleaning: 3.4962735642262164
###Markdown
Now there are 156 unique categorical values. That's better than 3060. If you were going to use this data with a supervised machine learning model, you could try converting these 156 values to dummy variables. You'd still have to train and test a model to see if those are good features. But can you do anything else with the sector1 variable? The percentage of null values for 'sector1' is now 3.49%. That turns out to be the same number as the null values for the 'sector' column. You can see this if you scroll back up to where the code calculated the percentage of null values for each variable. Perhaps the 'sector1' and 'sector' variables have the same information. If you look at the 'sector' variable, however, it also needs cleaning. The values look like this: 'Urban Transport;Urban Transport;Public Administration - Transportation'. It turns out the 'sector' variable combines information from the 'sector1' through 'sector5' variables and the 'mjsector' variable. You can inspect the 'sector' column directly to confirm this. What else can you do? If you look at all of the different sector1 categories, it might be useful to combine a few of them together. For example, there are various categories with the term "Energy" in them. And then there are other categories that seem related to energy but don't have the word energy in them like "Thermal" and "Hydro". Some categories have the term "Renewable Energy", so perhaps you could make a separate "Renewable Energy" category. Similarly, there are categories with the term "Transportation" in them, and then there are related categories like "Highways". In the next cell, find all sector1 values with the term 'Energy' in them. For each of these rows, put the string 'Energy' in a new column called 'sector1_aggregates'. Do the same for "Transportation".
###Code
import re
# Create the sector1_aggregates variable
sector.loc[:,'sector1_aggregates'] = sector['sector1']
# TODO: The code above created a new variable called sector1_aggregates.
# Currently, sector1_aggregates has all of the same values as sector1
# For this task, find all the rows in sector1_aggregates with the term 'Energy' in them,
# For all of these rows, replace whatever is the value is with the term 'Energy'.
# The idea is to simplify the category names by combining various categories together.
# Then, do the same for the term 'Transportation
# HINT: You can use the contains() methods. See the documentation for how to ignore case using the re library
# HINT: You might get an error saying "cannot index with vector containing NA / NaN values."
# Try converting NaN values to something else like False or a string
sector.loc[sector['sector1'].str.contains('Energy', case=False, na=False), 'sector1_aggregates'] = 'Energy'
sector.loc[sector['sector1'].str.contains('Transport', case=False, na=False), 'sector1_aggregates'] = 'Transportation'
print('Number of unique sectors after cleaning:', len(list(sector['sector1_aggregates'].unique())))
sector['sector1'].isnull().sum()
###Output
_____no_output_____
###Markdown
The number of unique sectors continues to go down. Keep in mind that how much to consolidate will depend on your machine learning model performance and your hardware's ability to handle the extra features in memory. If your hardware's memory can handle 3060 new features and your machine learning algorithm performs better, then go for it! There are still 638 entries with NaN values. How could you fill these in? You might try to determine an appropriate category from the 'project_name' or 'lendinginstr' variables. If you make dummy variables including NaN values, then you could consider a feature with all zeros to represent NaN. Or you could delete these records from the data set. Pandas will ignore NaN values by default. That means, for a given row, all dummy variables will have a value of 0 if the sector1 value was NaN. Don't forget about the bigger context! This data is being prepared for a machine learning algorithm. Whatever techniques you use to engineer new features, you'll need to use those when running your model on new data. So if your new data does not contain a sector1 value, you'll have to run whatever feature engineering processes you did on your training set. In this final step, use the pandas pd.get_dummies() method to create dummy variables. Then use the concat() method to concatenate the dummy variables to a dataframe that contains the project totalamt variable and the project year from the boardapprovaldate.
###Code
# TODO: Create dummy variables from the sector1_aggregates data. Put the results into a dataframe called dummies
# Hint: Use the get_dummies method
dummies = pd.get_dummies(sector.sector1_aggregates)
# TODO: Create a new dataframe called df by
# filtering the projects data for the totalamt and
# the year from boardapprovaldate
projects['year'] = projects.boardapprovaldate.dt.year
df = projects[['totalamt', 'year']]
# TODO: Concatenate the results of dummies and projects
# into a single data frame
df_final = pd.concat([df, dummies], axis=1)
df_final.head()
###Output
_____no_output_____
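###Markdown
One more note on the NaN handling mentioned above (a small sketch on a toy series): by default `pd.get_dummies` leaves a NaN row as all zeros, while `dummy_na=True` adds an explicit NaN indicator column.
###Code
# Toy illustration of how get_dummies treats missing values.
toy = pd.Series(['Energy', None, 'Health'])
print(pd.get_dummies(toy))                  # the NaN row becomes all zeros
print(pd.get_dummies(toy, dummy_na=True))   # adds an explicit NaN indicator column
###Output
_____no_output_____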
###Markdown
ConclusionPandas makes it relatively easy to create dummy variables; however, oftentimes you'll need to clean the data first.
###Code
df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
'B': ['abc', 'bar', 'xyz']})
df.replace(to_replace=r'^ba.$', value='new', regex=True)
df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
'B': ['abc', 'bar', 'xyz']})
df.replace(to_replace=r'^ba.$', value='new')
###Output
_____no_output_____ |
how-to-use-azureml/ml-frameworks/tensorflow/training/train-tensorflow-resume-training/train-tensorflow-resume-training.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Resuming Tensorflow training from previous runIn this tutorial, you will resume a mnist model in TensorFlow from a previously submitted run. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)* Go through the [configuration notebook](../../../configuration.ipynb) to: * install the AML SDK * create a workspace and its configuration file (`config.json`)* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize workspaceInitialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
###Code
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`. Upload data to datastoreTo make data accessible for remote training, AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data to Azure Storage, and interact with it from your remote compute targets. If your data is already stored in Azure, or you download the data as part of your training script, you will not need to do this step. For this tutorial, although you can download the data in your training script, we will demonstrate how to upload the training data to a datastore and access it during training to illustrate the datastore functionality. First download the data from Yan LeCun's web site directly and save them in a data folder locally.
###Code
import os
import urllib
os.makedirs('./data/mnist', exist_ok=True)
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename = './data/mnist/train-images.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename = './data/mnist/train-labels.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename = './data/mnist/test-images.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename = './data/mnist/test-labels.gz')
###Output
_____no_output_____
###Markdown
Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore.
###Code
ds = ws.get_default_datastore()
print(ds.datastore_type, ds.account_name, ds.container_name)
###Output
_____no_output_____
###Markdown
Upload MNIST data to the default datastore.
###Code
ds.upload(src_dir='./data/mnist', target_path='mnist', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
For convenience, let's get a reference to the datastore. In the next section, we can then pass this reference to our training script's `--data-folder` argument.
###Code
ds_data = ds.as_mount()
print(ds_data)
###Output
_____no_output_____
###Markdown
Train model on the remote compute Create a project directoryCreate a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
###Code
script_folder = './tf-resume-training'
os.makedirs(script_folder, exist_ok=True)
###Output
_____no_output_____
###Markdown
Copy the training script `tf_mnist_with_checkpoint.py` into this project directory.
###Code
import shutil
# the training logic is in the tf_mnist_with_checkpoint.py file.
shutil.copy('./tf_mnist_with_checkpoint.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
###Output
_____no_output_____
###Markdown
Create an experimentCreate an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in your workspace for this distributed TensorFlow tutorial.
###Code
from azureml.core import Experiment
experiment_name = 'tf-resume-training'
experiment = Experiment(ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create a TensorFlow estimatorThe AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow).The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': ds_data
}
estimator= TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py')
###Output
_____no_output_____
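###Markdown
If you prefer to pin the TensorFlow version rather than rely on the SDK default, you can list the supported versions first (a small optional check; the commented version string below is an assumption and must come from the printed list).
###Code
# Optional check: list the TensorFlow versions supported by the installed SDK.
print(TensorFlow.get_supported_versions())

# Example with an assumed version string -- pick a value from the list printed above:
# estimator = TensorFlow(source_directory=script_folder,
#                        compute_target=compute_target,
#                        script_params=script_params,
#                        entry_script='tf_mnist_with_checkpoint.py',
#                        framework_version='1.13')
###Output
_____no_output_____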
###Markdown
In the above code, we passed our training data reference `ds_data` to our script's `--data-folder` argument. This will 1) mount our datastore on the remote compute and 2) provide the path to the data zip file on our datastore. Submit job Run your experiment by submitting your estimator object. Note that this call is asynchronous.
###Code
run = experiment.submit(estimator)
print(run)
###Output
_____no_output_____
###Markdown
Monitor your runYou can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
###Code
from azureml.widgets import RunDetails
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Alternatively, you can block until the script has completed training before running more code.
###Code
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Now let's resume the training from the above run First, we will get the DataPath to the outputs directory of the above run whichcontains the checkpoint files and/or model
###Code
model_location = run._get_outputs_datapath()
###Output
_____no_output_____
###Markdown
Now, we will create a new TensorFlow estimator and pass in the model location. On passing 'resume_from' parameter, a new entry in script_params is created with key as 'resume_from' and value as the model/checkpoint files location and the location gets automatically mounted on the compute target.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': ds_data
}
estimator2 = TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
resume_from=model_location)
###Output
_____no_output_____
###Markdown
Now you can submit the experiment and it should resume from previous run's checkpoint files.
###Code
run2 = experiment.submit(estimator2)
print(run2)
run2.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Resuming Tensorflow training from previous runIn this tutorial, you will resume a mnist model in TensorFlow from a previously submitted run. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)* Go through the [configuration notebook](../../../configuration.ipynb) to: * install the AML SDK * create a workspace and its configuration file (`config.json`)* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize workspaceInitialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
###Code
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlComputeYou will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`. Create a Dataset for FilesA Dataset can reference single or multiple files in your datastores or public urls. The files can be of any format. Dataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. [Learn More](https://aka.ms/azureml/howto/createdatasets)
###Code
#initialize file dataset
from azureml.core.dataset import Dataset
web_paths = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
]
dataset = Dataset.File.from_files(path = web_paths)
###Output
_____no_output_____
###Markdown
you may want to register datasets using the register() method to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.You can try get the dataset first to see if it's already registered.
###Code
dataset_registered = False
try:
temp = Dataset.get_by_name(workspace = ws, name = 'mnist-dataset')
dataset_registered = True
except:
print("The dataset mnist-dataset is not registered in workspace yet.")
if not dataset_registered:
#register dataset to workspace
dataset = dataset.register(workspace = ws,
name = 'mnist-dataset',
description='training and test dataset',
create_new_version=True)
# list the files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Train model on the remote compute Create a project directoryCreate a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
###Code
import os
script_folder = './tf-resume-training'
os.makedirs(script_folder, exist_ok=True)
###Output
_____no_output_____
###Markdown
Copy the training script `tf_mnist_with_checkpoint.py` into this project directory.
###Code
import shutil
# the training logic is in the tf_mnist_with_checkpoint.py file.
shutil.copy('./tf_mnist_with_checkpoint.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
###Output
_____no_output_____
###Markdown
Create an experimentCreate an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in your workspace for this distributed TensorFlow tutorial.
###Code
from azureml.core import Experiment
experiment_name = 'tf-resume-training'
experiment = Experiment(ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create a TensorFlow estimator. The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow). The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version, or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.
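As a quick optional check (a sketch that assumes the `azureml-train-core` package is available in your environment), you can list the supported versions before constructing the estimator:
###Code
# Optional: list the TensorFlow framework versions supported by the installed SDK.
# Any of these strings can be pinned via the framework_version parameter of the estimator below.
from azureml.train.dnn import TensorFlow
print(TensorFlow.get_supported_versions())
###Output
_____no_output_____
###Markdown
Now construct the estimator itself, mounting the registered MNIST dataset as the script's data folder.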
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator= TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
use_gpu=True,
pip_packages=['azureml-dataset-runtime[pandas,fuse]'])
###Output
_____no_output_____
###Markdown
In the above code, we passed our training dataset (mounted via `dataset.as_named_input('mnist').as_mount()`) to our script's `--data-folder` argument. This will 1) mount the dataset files on the remote compute and 2) provide your script with the path to the mounted MNIST files. Submit job. Run your experiment by submitting your estimator object. Note that this call is asynchronous.
###Code
run = experiment.submit(estimator)
print(run)
###Output
_____no_output_____
###Markdown
Monitor your run. You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
###Code
from azureml.widgets import RunDetails
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Alternatively, you can block until the script has completed training before running more code.
###Code
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Now let's resume the training from the above run. First, we will get the DataPath to the outputs directory of the above run, which contains the checkpoint files and/or model.
###Code
model_location = run._get_outputs_datapath()
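# (Optional, illustrative) the files the previous run wrote under ./outputs -- e.g. the
# checkpoint files -- can be listed with: print(run.get_file_names())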
###Output
_____no_output_____
###Markdown
Now, we will create a new TensorFlow estimator and pass in the model location. When the 'resume_from' parameter is passed, a new entry is created in script_params with 'resume_from' as the key and the model/checkpoint files location as the value, and that location is automatically mounted on the compute target.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator2 = TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
resume_from=model_location,
use_gpu=True,
pip_packages=['azureml-dataset-runtime[pandas,fuse]'])
###Output
_____no_output_____
###Markdown
Now you can submit the experiment and it should resume from previous run's checkpoint files.
###Code
run2 = experiment.submit(estimator2)
print(run2)
run2.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Resuming TensorFlow training from a previous run. In this tutorial, you will resume an MNIST model in TensorFlow from a previously submitted run. Prerequisites:
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)
* Go through the [configuration notebook](../../../configuration.ipynb) to:
  * install the AML SDK
  * create a workspace and its configuration file (`config.json`)
* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Diagnostics. Opt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize workspace. Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
###Code
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute. You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource. **Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
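# (Optional) compute targets that already exist in the workspace can be inspected first,
# e.g.: print(list(ws.compute_targets.keys()))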
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`. Create a FileDataset. A FileDataset references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. [Learn More](https://aka.ms/azureml/howto/createdatasets)
###Code
#initialize file dataset
from azureml.core.dataset import Dataset
web_paths = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
]
dataset = Dataset.File.from_files(path = web_paths)
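# (Illustrative) besides being mounted on the remote compute later on, a FileDataset can
# also be downloaded locally, e.g.: dataset.download(target_path='./mnist_data', overwrite=True)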
###Output
_____no_output_____
###Markdown
Use the register() method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
#register dataset to workspace
dataset = dataset.register(workspace = ws,
name = 'mnist dataset',
description='training and test dataset',
create_new_version=True)
# list the files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Train model on the remote compute. Create a project directory. Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
###Code
import os
script_folder = './tf-resume-training'
os.makedirs(script_folder, exist_ok=True)
###Output
_____no_output_____
###Markdown
Copy the training script `tf_mnist_with_checkpoint.py` into this project directory.
###Code
import shutil
# the training logic is in the tf_mnist_with_checkpoint.py file.
shutil.copy('./tf_mnist_with_checkpoint.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
###Output
_____no_output_____
###Markdown
Create an experiment. Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in your workspace for this distributed TensorFlow tutorial.
###Code
from azureml.core import Experiment
experiment_name = 'tf-resume-training'
experiment = Experiment(ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create a TensorFlow estimator. The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow). The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version, or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.
###Code
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
# set up environment
env = Environment('my_env')
# ensure latest azureml-dataprep and other required packages installed in the environment
cd = CondaDependencies.create(pip_packages=['keras',
'azureml-sdk',
'tensorflow-gpu',
'matplotlib',
'azureml-dataprep[pandas,fuse]>=1.1.14'])
env.python.conda_dependencies = cd
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator= TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
environment_definition= env)
dataset.to_path()
###Output
_____no_output_____
###Markdown
In the above code, we passed our training dataset (mounted via `dataset.as_named_input('mnist').as_mount()`) to our script's `--data-folder` argument. This will 1) mount the dataset files on the remote compute and 2) provide your script with the path to the mounted MNIST files. Submit job. Run your experiment by submitting your estimator object. Note that this call is asynchronous.
###Code
run = experiment.submit(estimator)
print(run)
###Output
_____no_output_____
###Markdown
Monitor your run. You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
###Code
from azureml.widgets import RunDetails
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Alternatively, you can block until the script has completed training before running more code.
###Code
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Now let's resume the training from the above run. First, we will get the DataPath to the outputs directory of the above run, which contains the checkpoint files and/or model.
###Code
model_location = run._get_outputs_datapath()
###Output
_____no_output_____
###Markdown
Now, we will create a new TensorFlow estimator and pass in the model location. When the 'resume_from' parameter is passed, a new entry is created in script_params with 'resume_from' as the key and the model/checkpoint files location as the value, and that location is automatically mounted on the compute target.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator2 = TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
resume_from=model_location,
environment_definition = env)
###Output
_____no_output_____
###Markdown
Now you can submit the experiment and it should resume from previous run's checkpoint files.
###Code
run2 = experiment.submit(estimator2)
print(run2)
run2.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Resuming TensorFlow training from a previous run. In this tutorial, you will resume an MNIST model in TensorFlow from a previously submitted run. Prerequisites:
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)
* Go through the [configuration notebook](../../../configuration.ipynb) to:
  * install the AML SDK
  * create a workspace and its configuration file (`config.json`)
* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Diagnostics. Opt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize workspace. Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
###Code
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute. You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource. **Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`. Create a Dataset for Files. A Dataset can reference single or multiple files in your datastores or public urls. The files can be of any format. Dataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. [Learn More](https://aka.ms/azureml/howto/createdatasets)
###Code
#initialize file dataset
from azureml.core.dataset import Dataset
web_paths = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
]
dataset = Dataset.File.from_files(path = web_paths)
###Output
_____no_output_____
###Markdown
You may want to register datasets to your workspace using the register() method so they can be shared with others, reused across various experiments, and referred to by name in your training script. You can try to get the dataset first to see if it is already registered.
###Code
dataset_registered = False
try:
temp = Dataset.get_by_name(workspace = ws, name = 'mnist-dataset')
dataset_registered = True
except:
print("The dataset mnist-dataset is not registered in workspace yet.")
if not dataset_registered:
#register dataset to workspace
dataset = dataset.register(workspace = ws,
name = 'mnist-dataset',
description='training and test dataset',
create_new_version=True)
# list the files referenced by dataset
dataset.to_path()
###Output
_____no_output_____
###Markdown
Train model on the remote compute. Create a project directory. Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
###Code
import os
script_folder = './tf-resume-training'
os.makedirs(script_folder, exist_ok=True)
###Output
_____no_output_____
###Markdown
Copy the training script `tf_mnist_with_checkpoint.py` into this project directory.
###Code
import shutil
# the training logic is in the tf_mnist_with_checkpoint.py file.
shutil.copy('./tf_mnist_with_checkpoint.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
###Output
_____no_output_____
###Markdown
Create an experiment. Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in your workspace for this distributed TensorFlow tutorial.
###Code
from azureml.core import Experiment
experiment_name = 'tf-resume-training'
experiment = Experiment(ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create a TensorFlow estimator. The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow). The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version, or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator= TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
use_gpu=True,
pip_packages=['azureml-dataprep[pandas,fuse]'])
###Output
_____no_output_____
###Markdown
In the above code, we passed our training dataset (mounted via `dataset.as_named_input('mnist').as_mount()`) to our script's `--data-folder` argument. This will 1) mount the dataset files on the remote compute and 2) provide your script with the path to the mounted MNIST files. Submit job. Run your experiment by submitting your estimator object. Note that this call is asynchronous.
###Code
run = experiment.submit(estimator)
print(run)
###Output
_____no_output_____
###Markdown
Monitor your run. You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
###Code
from azureml.widgets import RunDetails
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Alternatively, you can block until the script has completed training before running more code.
###Code
run.wait_for_completion(show_output=True)
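# (Optional) once the run has finished, any metrics the training script logged
# can be retrieved with run.get_metrics()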
###Output
_____no_output_____
###Markdown
Now let's resume the training from the above run. First, we will get the DataPath to the outputs directory of the above run, which contains the checkpoint files and/or model.
###Code
model_location = run._get_outputs_datapath()
###Output
_____no_output_____
###Markdown
Now, we will create a new TensorFlow estimator and pass in the model location. When the 'resume_from' parameter is passed, a new entry is created in script_params with 'resume_from' as the key and the model/checkpoint files location as the value, and that location is automatically mounted on the compute target.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator2 = TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
resume_from=model_location,
use_gpu=True,
pip_packages=['azureml-dataprep[pandas,fuse]'])
###Output
_____no_output_____
###Markdown
Now you can submit the experiment and it should resume from previous run's checkpoint files.
###Code
run2 = experiment.submit(estimator2)
print(run2)
run2.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.  Resuming TensorFlow training from a previous run. In this tutorial, you will resume an MNIST model in TensorFlow from a previously submitted run. Prerequisites:
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)
* Go through the [configuration notebook](../../../configuration.ipynb) to:
  * install the AML SDK
  * create a workspace and its configuration file (`config.json`)
* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Diagnostics. Opt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize workspace. Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
###Code
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Create or Attach existing AmlCompute. You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource. **Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`. Create a Dataset for Files. A Dataset can reference single or multiple files in your datastores or public urls. The files can be of any format. Dataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. [Learn More](https://aka.ms/azureml/howto/createdatasets)
###Code
#initialize file dataset
from azureml.core.dataset import Dataset
web_paths = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
]
dataset = Dataset.File.from_files(path = web_paths)
###Output
_____no_output_____
###Markdown
You may want to register datasets to your workspace using the register() method so they can be shared with others, reused across various experiments, and referred to by name in your training script.
###Code
#register dataset to workspace
dataset = dataset.register(workspace = ws,
name = 'mnist dataset',
description='training and test dataset',
create_new_version=True)
# list the files referenced by dataset
dataset.to_path()
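# (Illustrative) a registered dataset can later be retrieved by name, e.g.:
# Dataset.get_by_name(workspace=ws, name='mnist dataset')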
###Output
_____no_output_____
###Markdown
Train model on the remote compute. Create a project directory. Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
###Code
import os
script_folder = './tf-resume-training'
os.makedirs(script_folder, exist_ok=True)
###Output
_____no_output_____
###Markdown
Copy the training script `tf_mnist_with_checkpoint.py` into this project directory.
###Code
import shutil
# the training logic is in the tf_mnist_with_checkpoint.py file.
shutil.copy('./tf_mnist_with_checkpoint.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
###Output
_____no_output_____
###Markdown
Create an experiment. Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in your workspace for this distributed TensorFlow tutorial.
###Code
from azureml.core import Experiment
experiment_name = 'tf-resume-training'
experiment = Experiment(ws, name=experiment_name)
###Output
_____no_output_____
###Markdown
Create a TensorFlow estimator. The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow). The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version, or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator= TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
use_gpu=True,
pip_packages=['azureml-dataprep[pandas,fuse]'])
###Output
_____no_output_____
###Markdown
In the above code, we passed our training dataset (mounted via `dataset.as_named_input('mnist').as_mount()`) to our script's `--data-folder` argument. This will 1) mount the dataset files on the remote compute and 2) provide your script with the path to the mounted MNIST files. Submit job. Run your experiment by submitting your estimator object. Note that this call is asynchronous.
###Code
run = experiment.submit(estimator)
print(run)
###Output
_____no_output_____
###Markdown
Monitor your run. You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
###Code
from azureml.widgets import RunDetails
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Alternatively, you can block until the script has completed training before running more code.
###Code
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Now let's resume the training from the above run. First, we will get the DataPath to the outputs directory of the above run, which contains the checkpoint files and/or model.
###Code
model_location = run._get_outputs_datapath()
###Output
_____no_output_____
###Markdown
Now, we will create a new TensorFlow estimator and pass in the model location. When the 'resume_from' parameter is passed, a new entry is created in script_params with 'resume_from' as the key and the model/checkpoint files location as the value, and that location is automatically mounted on the compute target.
###Code
from azureml.train.dnn import TensorFlow
script_params={
'--data-folder': dataset.as_named_input('mnist').as_mount()
}
estimator2 = TensorFlow(source_directory=script_folder,
compute_target=compute_target,
script_params=script_params,
entry_script='tf_mnist_with_checkpoint.py',
resume_from=model_location,
use_gpu=True,
pip_packages=['azureml-dataprep[pandas,fuse]'])
###Output
_____no_output_____
###Markdown
Now you can submit the experiment and it should resume from previous run's checkpoint files.
###Code
run2 = experiment.submit(estimator2)
print(run2)
run2.wait_for_completion(show_output=True)
###Output
_____no_output_____ |
tutorials/streamlit_notebooks/healthcare/ER_ICD10_CM.ipynb | ###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/ER_ICD10_CM.ipynb) **ICD10-CM coding** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
SparkNLP Version: 2.6.0
SparkNLP-JSL Version: 2.6.0
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
openjdk version "11.0.8" 2020-07-14
OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
[K |████████████████████████████████| 215.7MB 68kB/s
[K |████████████████████████████████| 204kB 41.7MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
Collecting spark-nlp==2.6.0
[?25l Downloading https://files.pythonhosted.org/packages/e4/30/1bd0abcc97caed518efe527b9146897255dffcf71c4708586a82ea9eb29a/spark_nlp-2.6.0-py2.py3-none-any.whl (125kB)
[K |████████████████████████████████| 133kB 3.2MB/s
[?25hInstalling collected packages: spark-nlp
Successfully installed spark-nlp-2.6.0
Looking in indexes: https://pypi.org/simple, https://pypi.johnsnowlabs.com/2.6.0-8388813d58b67fa25bf9cf603393363af96dba16
Collecting spark-nlp-jsl==2.6.0
Downloading https://pypi.johnsnowlabs.com/2.6.0-8388813d58b67fa25bf9cf603393363af96dba16/spark-nlp-jsl/spark_nlp_jsl-2.6.0-py3-none-any.whl
Requirement already satisfied, skipping upgrade: spark-nlp==2.6.0 in /usr/local/lib/python3.6/dist-packages (from spark-nlp-jsl==2.6.0) (2.6.0)
Installing collected packages: spark-nlp-jsl
Successfully installed spark-nlp-jsl-2.6.0
###Markdown
Import dependencies into Python
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the Entity Resolver model and construct the pipeline. Select the models: **ICD10 Entity Resolver models:**
1. **chunkresolve_icd10cm_clinical**
2. **chunkresolve_icd10cm_diseases_clinical**
3. **chunkresolve_icd10cm_injuries_clinical**
4. **chunkresolve_icd10cm_musculoskeletal_clinical**
5. **chunkresolve_icd10cm_neoplasms_clinical**
6. **chunkresolve_icd10cm_puerile_clinical**
For more details: https://github.com/JohnSnowLabs/spark-nlp-modelspretrained-models---spark-nlp-for-healthcare
###Code
#ner and entity resolver mapping
ner_er_dict = {'chunkresolve_icd10cm_clinical': 'ner_clinical',
'chunkresolve_icd10cm_diseases_clinical': 'ner_diseases',
'chunkresolve_icd10cm_injuries_clinical': 'ner_jsl',
'chunkresolve_icd10cm_musculoskeletal_clinical': 'ner_jsl',
'chunkresolve_icd10cm_neoplasms_clinical': 'ner_jsl',
'chunkresolve_icd10cm_puerile_clinical': 'ner_clinical'}
# ER models are specific to the codes they are trained on, so we need to filter out entities that will cause noise.
wl_er_dict = {'chunkresolve_icd10cm_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_diseases_clinical': ['Disease'],
'chunkresolve_icd10cm_injuries_clinical': ['Diagnosis'],
'chunkresolve_icd10cm_musculoskeletal_clinical': ['Diagnosis'],
'chunkresolve_icd10cm_neoplasms_clinical': ['Diagnosis'],
'chunkresolve_icd10cm_puerile_clinical': ['PROBLEM']}
# Change this to the model you want to use and re-run the cells below.
model = 'chunkresolve_icd10cm_clinical'
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
ner_model = NerDLModel().pretrained(ner_er_dict[model], 'en', 'clinical/models')\
.setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("ner_tags")
#using defined whitelist. You can define your own as well.
ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "ner_tags"])\
.setOutputCol("ner_chunk").setWhiteList(wl_er_dict[model])
chunk_embeddings = ChunkEmbeddings()\
.setInputCols("ner_chunk", "embeddings")\
.setOutputCol("chunk_embeddings")
entity_resolver = \
ChunkEntityResolverModel.pretrained(model,"en","clinical/models")\
.setInputCols("tokens","chunk_embeddings").setOutputCol("resolution")
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
embeddings,
ner_model,
ner_chunker,
chunk_embeddings,
entity_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = sparknlp.base.LightPipeline(pipeline_model)
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.8 MB
[OK!]
chunkresolve_icd10cm_clinical download started this may take some time.
Approximate size to download 166.3 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""Nature and course of the diagnosis has been discussed with the patient. Based on her presentation without any history of obvious fall or trauma and past history of malignant melanoma, this appears to be a pathological fracture of the left proximal hip. At the present time, I would recommend obtaining a bone scan and repeat x-rays, which will include AP pelvis, femur, hip including knee. She denies any pain elsewhere. She does have a past history of back pain and sciatica, but at the present time, this appears to be a metastatic bone lesion with pathological fracture. I have discussed the case with Dr. X and recommended oncology consultation.
With the above fracture and presentation, she needs a left hip hemiarthroplasty versus calcar hemiarthroplasty, cemented type. Indication, risk, and benefits of left hip hemiarthroplasty has been discussed with the patient, which includes, but not limited to bleeding, infection, nerve injury, blood vessel injury, dislocation early and late, persistent pain, leg length discrepancy, myositis ossificans, intraoperative fracture, prosthetic fracture, need for conversion to total hip replacement surgery, revision surgery, DVT, pulmonary embolism, risk of anesthesia, need for blood transfusion, and cardiac arrest. She understands above and is willing to undergo further procedure. The goal and the functional outcome have been explained. Further plan will be discussed with her once we obtain the bone scan and the radiographic studies. We will also await for the oncology feedback and clearance.""",
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
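# Note: transform() is lazy in Spark; the heavy computation runs when the result
# DataFrame is materialized (e.g. by the .toPandas() call further below).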
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize Full Pipeline
###Code
result.select(
F.explode(
F.arrays_zip('resolution.metadata', 'resolution.begin' , 'resolution.end', 'resolution.result')
).alias('cols')
).select(
F.expr("cols['0']['token']").alias('token/chunk'),
F.expr("cols['1']").alias('begin'),
F.expr("cols['2']").alias('end'),
F.expr("cols['0']['resolved_text']").alias('resolved_text'),
F.expr("cols['3']").alias('icd10_code'),
).toPandas()
###Output
_____no_output_____
###Markdown
Light Pipeline
###Code
light_result[0]['resolution']
###Output
_____no_output_____
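###Markdown
The elements above are Spark NLP `Annotation` objects. As a small illustrative sketch (using only fields already produced by the resolver), they can be tabulated into chunk, ICD-10 code, and description:
###Code
# Tabulate the light pipeline resolutions: chunk text, ICD-10 code, resolved description
pd.DataFrame(
    [(a.metadata.get('token'), a.result, a.metadata.get('resolved_text'))
     for a in light_result[0]['resolution']],
    columns=['chunk', 'icd10_code', 'icd10_description'])
###Output
_____no_output_____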
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/ER_ICD10_CM.ipynb) **ICD10-CM coding** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
SparkNLP Version: 2.6.0
SparkNLP-JSL Version: 2.6.0
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
openjdk version "11.0.8" 2020-07-14
OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
[K |████████████████████████████████| 215.7MB 55kB/s
[K |████████████████████████████████| 204kB 46.9MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
Collecting spark-nlp==2.6.0
[?25l Downloading https://files.pythonhosted.org/packages/e4/30/1bd0abcc97caed518efe527b9146897255dffcf71c4708586a82ea9eb29a/spark_nlp-2.6.0-py2.py3-none-any.whl (125kB)
[K |████████████████████████████████| 133kB 2.9MB/s
[?25hInstalling collected packages: spark-nlp
Successfully installed spark-nlp-2.6.0
Looking in indexes: https://pypi.org/simple, https://pypi.johnsnowlabs.com/2.6.0-8388813d58b67fa25bf9cf603393363af96dba16
Collecting spark-nlp-jsl==2.6.0
Requirement already satisfied, skipping upgrade: spark-nlp==2.6.0 in /usr/local/lib/python3.6/dist-packages (from spark-nlp-jsl==2.6.0) (2.6.0)
Installing collected packages: spark-nlp-jsl
Successfully installed spark-nlp-jsl-2.6.0
###Markdown
Import dependencies into Python
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the Entity Resolver model and construct the pipeline. **NOTE: The mapping below is an example of how ICD10 resolvers work with different NER models. You can choose different combinations according to your input data and requirements.** Select the models: **ICD10 Entity Resolver models:**
1. **chunkresolve_icd10cm_clinical**
2. **chunkresolve_icd10cm_diseases_clinical**
3. **chunkresolve_icd10cm_injuries_clinical**
4. **chunkresolve_icd10cm_musculoskeletal_clinical**
5. **chunkresolve_icd10cm_neoplasms_clinical**
6. **chunkresolve_icd10cm_puerile_clinical**
For more details: https://github.com/JohnSnowLabs/spark-nlp-modelspretrained-models---spark-nlp-for-healthcare
###Code
#ner and entity resolver mapping
ner_er_dict = {'chunkresolve_icd10cm_clinical': 'ner_clinical',
'chunkresolve_icd10cm_diseases_clinical': 'ner_diseases',
'chunkresolve_icd10cm_injuries_clinical': 'ner_clinical',
'chunkresolve_icd10cm_musculoskeletal_clinical': 'ner_clinical',
'chunkresolve_icd10cm_neoplasms_clinical': 'ner_bionlp',
'chunkresolve_icd10cm_puerile_clinical': 'ner_jsl'}
# ER models are specific to the codes they are trained on, so we need to filter out entities that will cause noise.
wl_er_dict = {'chunkresolve_icd10cm_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_diseases_clinical': ['Disease'],
'chunkresolve_icd10cm_injuries_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_musculoskeletal_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_neoplasms_clinical': ['CANCER','PATHOLOGICAL_FORMATION'],
'chunkresolve_icd10cm_puerile_clinical': ['PROBLEM']}
# Change this to the model you want to use and re-run the cells below.
model = 'chunkresolve_icd10cm_clinical'
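# For example, to map only disease mentions you could instead set
# model = 'chunkresolve_icd10cm_diseases_clinical' and re-run the cells below.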
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
ner_model = NerDLModel().pretrained(ner_er_dict[model], 'en', 'clinical/models')\
.setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("ner_tags")
#using defined whitelist. You can define your own as well.
ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "ner_tags"])\
.setOutputCol("ner_chunk").setWhiteList(wl_er_dict[model])
chunk_embeddings = ChunkEmbeddings()\
.setInputCols("ner_chunk", "embeddings")\
.setOutputCol("chunk_embeddings")
entity_resolver = \
ChunkEntityResolverModel.pretrained(model,"en","clinical/models")\
.setInputCols("tokens","chunk_embeddings").setOutputCol("resolution")
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
embeddings,
ner_model,
ner_chunker,
chunk_embeddings,
entity_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = sparknlp.base.LightPipeline(pipeline_model)
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.8 MB
[OK!]
chunkresolve_icd10cm_clinical download started this may take some time.
Approximate size to download 166.3 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""The patient is a 5-month-old infant who presented initially on Monday with a cold, cough, and runny nose for 2 days. Mom states she had no fever. Her appetite was good but she was spitting up a lot. She had no difficulty breathing and her cough was described as dry and hacky. At that time, physical exam showed a right TM, which was red. Left TM was okay. She was fairly congested but looked happy and playful. She was started on Amoxil and Aldex and we told to recheck in 2 weeks to recheck her ear. Mom returned to clinic again today because she got much worse overnight. She was having difficulty breathing. She was much more congested and her appetite had decreased significantly today. She also spiked a temperature yesterday of 102.6 and always having trouble sleeping secondary to congestion.""",
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize Full Pipeline
###Code
result.select(
F.explode(
F.arrays_zip('ner_chunk.result',
'ner_chunk.begin',
'ner_chunk.end',
'ner_chunk.metadata',
'resolution.metadata', 'resolution.result')
).alias('cols')
).select(
F.expr("cols['0']").alias('chunk'),
F.expr("cols['1']").alias('begin'),
F.expr("cols['2']").alias('end'),
F.expr("cols['3']['entity']").alias('entity'),
F.expr("cols['4']['resolved_text']").alias('icd10_description'),
F.expr("cols['5']").alias('icd10_code'),
).toPandas()
###Output
_____no_output_____
###Markdown
Light Pipeline
###Code
light_result[0]['resolution']
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/ER_ICD10_CM.ipynb) **ICD10-CM coding** To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens. 1. Colab Setup Import license keys
###Code
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
###Output
_____no_output_____
###Markdown
Import dependencies into Python
###Code
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(secret)
###Output
_____no_output_____
###Markdown
2. Select the Entity Resolver model and construct the pipeline. **NOTE: The mapping below is an example of how ICD10 resolvers work with different NER models. You can choose different combinations according to your input data and requirements.** Select the models: **ICD10 Entity Resolver models:**
1. **chunkresolve_icd10cm_clinical**
2. **chunkresolve_icd10cm_diseases_clinical**
3. **chunkresolve_icd10cm_injuries_clinical**
4. **chunkresolve_icd10cm_musculoskeletal_clinical**
5. **chunkresolve_icd10cm_neoplasms_clinical**
6. **chunkresolve_icd10cm_puerile_clinical**
For more details: https://github.com/JohnSnowLabs/spark-nlp-modelspretrained-models---spark-nlp-for-healthcare
###Code
#ner and entity resolver mapping
ner_er_dict = {'chunkresolve_icd10cm_clinical': 'ner_clinical',
'chunkresolve_icd10cm_diseases_clinical': 'ner_diseases',
'chunkresolve_icd10cm_injuries_clinical': 'ner_clinical',
'chunkresolve_icd10cm_musculoskeletal_clinical': 'ner_clinical',
'chunkresolve_icd10cm_neoplasms_clinical': 'ner_bionlp',
'chunkresolve_icd10cm_puerile_clinical': 'ner_jsl'}
# ER models are specific to the codes they are trained on, so we need to filter out entities that will cause noise.
wl_er_dict = {'chunkresolve_icd10cm_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_diseases_clinical': ['Disease'],
'chunkresolve_icd10cm_injuries_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_musculoskeletal_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_neoplasms_clinical': ['CANCER','PATHOLOGICAL_FORMATION'],
'chunkresolve_icd10cm_puerile_clinical': ['PROBLEM']}
# Change this to the model you want to use and re-run the cells below.
model = 'chunkresolve_icd10cm_clinical'
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
ner_model = NerDLModel().pretrained(ner_er_dict[model], 'en', 'clinical/models')\
.setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("ner_tags")
#using defined whitelist. You can define your own as well.
ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "ner_tags"])\
.setOutputCol("ner_chunk").setWhiteList(wl_er_dict[model])
chunk_embeddings = ChunkEmbeddings()\
.setInputCols("ner_chunk", "embeddings")\
.setOutputCol("chunk_embeddings")
entity_resolver = \
ChunkEntityResolverModel.pretrained(model,"en","clinical/models")\
.setInputCols("tokens","chunk_embeddings").setOutputCol("resolution")
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
embeddings,
ner_model,
ner_chunker,
chunk_embeddings,
entity_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = sparknlp.base.LightPipeline(pipeline_model)
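# Note: LightPipeline also offers annotate() for plain-string output; fullAnnotate()
# (used below) is preferred here because it keeps the metadata carrying the resolved
# ICD-10 description.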
###Output
_____no_output_____
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""The patient is a 5-month-old infant who presented initially on Monday with a cold, cough, and runny nose for 2 days. Mom states she had no fever. Her appetite was good but she was spitting up a lot. She had no difficulty breathing and her cough was described as dry and hacky. At that time, physical exam showed a right TM, which was red. Left TM was okay. She was fairly congested but looked happy and playful. She was started on Amoxil and Aldex and we told to recheck in 2 weeks to recheck her ear. Mom returned to clinic again today because she got much worse overnight. She was having difficulty breathing. She was much more congested and her appetite had decreased significantly today. She also spiked a temperature yesterday of 102.6 and always having trouble sleeping secondary to congestion.""",
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize Full Pipeline
###Code
result.select(
F.explode(
F.arrays_zip('ner_chunk.result',
'ner_chunk.begin',
'ner_chunk.end',
'ner_chunk.metadata',
'resolution.metadata', 'resolution.result')
).alias('cols')
).select(
F.expr("cols['0']").alias('chunk'),
F.expr("cols['1']").alias('begin'),
F.expr("cols['2']").alias('end'),
F.expr("cols['3']['entity']").alias('entity'),
F.expr("cols['4']['resolved_text']").alias('icd10_description'),
F.expr("cols['5']").alias('icd10_code'),
).toPandas()
###Output
_____no_output_____
###Markdown
Light Pipeline
###Code
light_result[0]['resolution']
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/ER_ICD10_CM.ipynb) **ICD10-CM coding** To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens.Otherwise, you can look at the example outputs at the bottom of the notebook. 1. Colab Setup Import license keys
###Code
import os
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
###Output
_____no_output_____
###Markdown
Install dependencies
###Code
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh
# Install Spark NLP Display for visualization
!pip install --ignore-installed spark-nlp-display
###Output
_____no_output_____
###Markdown
Import dependencies into Python
###Code
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
###Output
_____no_output_____
###Markdown
Start the Spark session
###Code
spark = sparknlp_jsl.start(license_keys['SECRET'])
# manually start session
# params = {"spark.driver.memory" : "16G",
# "spark.kryoserializer.buffer.max" : "2000M",
# "spark.driver.maxResultSize" : "2000M"}
# spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
###Output
_____no_output_____
###Markdown
2. Select the Entity Resolver model and construct the pipeline **NOTE: The mapping below is an example of how ICD10 resolvers work with different NER models. You can choose different combinations according to your input data and requirements.** Select the models:**ICD10 Entity Resolver models:**1. **chunkresolve_icd10cm_clinical**2. **chunkresolve_icd10cm_diseases_clinical**3. **chunkresolve_icd10cm_injuries_clinical**4. **chunkresolve_icd10cm_musculoskeletal_clinical**5. **chunkresolve_icd10cm_neoplasms_clinical**6. **chunkresolve_icd10cm_puerile_clinical**For more details: https://github.com/JohnSnowLabs/spark-nlp-modelspretrained-models---spark-nlp-for-healthcare
###Code
#ner and entity resolver mapping
ner_er_dict = {'chunkresolve_icd10cm_clinical': 'ner_clinical',
'chunkresolve_icd10cm_diseases_clinical': 'ner_diseases',
'chunkresolve_icd10cm_injuries_clinical': 'ner_clinical',
'chunkresolve_icd10cm_musculoskeletal_clinical': 'ner_clinical',
'chunkresolve_icd10cm_neoplasms_clinical': 'ner_bionlp',
'chunkresolve_icd10cm_puerile_clinical': 'ner_jsl'}
# ER models are specific to the codes they are trained on, so we need to filter out entities that will cause noise.
wl_er_dict = {'chunkresolve_icd10cm_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_diseases_clinical': ['Disease'],
'chunkresolve_icd10cm_injuries_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_musculoskeletal_clinical': ['PROBLEM'],
'chunkresolve_icd10cm_neoplasms_clinical': ['CANCER','PATHOLOGICAL_FORMATION'],
'chunkresolve_icd10cm_puerile_clinical': ['PROBLEM']}
# Change this to the model you want to use and re-run the cells below.
model = 'chunkresolve_icd10cm_clinical'
###Output
_____no_output_____
###Markdown
Create the pipeline
###Code
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
ner_model = MedicalNerModel.pretrained(ner_er_dict[model], "en", "clinical/models") \
.setInputCols(["sentences", "tokens", "embeddings"])\
.setOutputCol("ner_tags")
#using defined whitelist. You can define your own as well.
ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "ner_tags"])\
.setOutputCol("ner_chunk").setWhiteList(wl_er_dict[model])
chunk_embeddings = ChunkEmbeddings()\
.setInputCols("ner_chunk", "embeddings")\
.setOutputCol("chunk_embeddings")
entity_resolver = \
ChunkEntityResolverModel.pretrained(model,"en","clinical/models")\
.setInputCols("tokens","chunk_embeddings").setOutputCol("resolution")
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
embeddings,
ner_model,
ner_chunker,
chunk_embeddings,
entity_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = sparknlp.base.LightPipeline(pipeline_model)
###Output
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
chunkresolve_icd10cm_clinical download started this may take some time.
Approximate size to download 166.2 MB
[OK!]
###Markdown
3. Create example inputs
###Code
# Enter examples as strings in this array
input_list = [
"""The patient is a 5-month-old infant who presented initially on Monday with a cold, cough, and runny nose for 2 days. Mom states she had no fever. Her appetite was good but she was spitting up a lot. She had no difficulty breathing and her cough was described as dry and hacky. At that time, physical exam showed a right TM, which was red. Left TM was okay. She was fairly congested but looked happy and playful. She was started on Amoxil and Aldex and we told to recheck in 2 weeks to recheck her ear. Mom returned to clinic again today because she got much worse overnight. She was having difficulty breathing. She was much more congested and her appetite had decreased significantly today. She also spiked a temperature yesterday of 102.6 and always having trouble sleeping secondary to congestion.""",
]
###Output
_____no_output_____
###Markdown
4. Run the pipeline
###Code
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
###Output
_____no_output_____
###Markdown
5. Visualize Full Pipeline
###Code
result.select(
F.explode(
F.arrays_zip('ner_chunk.result',
'ner_chunk.begin',
'ner_chunk.end',
'ner_chunk.metadata',
'resolution.metadata', 'resolution.result')
).alias('cols')
).select(
F.expr("cols['0']").alias('chunk'),
F.expr("cols['1']").alias('begin'),
F.expr("cols['2']").alias('end'),
F.expr("cols['3']['entity']").alias('entity'),
F.expr("cols['4']['resolved_text']").alias('icd10_description'),
F.expr("cols['5']").alias('icd10_code'),
).show(truncate=False)
###Output
+--------------------+-----+---+-------+------------------------------------------------------+----------+
|chunk |begin|end|entity |icd10_description |icd10_code|
+--------------------+-----+---+-------+------------------------------------------------------+----------+
|a cold, cough |75 |87 |PROBLEM|Chronic obstructive pulmonary disease, unspecified |J449 |
|runny nose |94 |103|PROBLEM|Nasal congestion |R0981 |
|fever |139 |143|PROBLEM|O'nyong-nyong fever |A921 |
|difficulty breathing|210 |229|PROBLEM|Shortness of breath |R0602 |
|her cough |235 |243|PROBLEM|Cough |R05 |
|dry |262 |264|PROBLEM|Dry beriberi |E5111 |
|hacky |270 |274|PROBLEM|Encounter for screening for malignant neoplasm of skin|Z1283 |
|a right TM |312 |321|PROBLEM|Pingueculitis, right eye |H10811 |
|red |334 |336|PROBLEM|Leptospirosis, unspecified |A279 |
|fairly congested |365 |380|PROBLEM|Edema, unspecified |R609 |
|much worse overnight|553 |572|PROBLEM|Hypersomnia, unspecified |G4710 |
|difficulty breathing|590 |609|PROBLEM|Shortness of breath |R0602 |
|much more congested |620 |638|PROBLEM|Hypersomnia, unspecified |G4710 |
|trouble sleeping |759 |774|PROBLEM|Activity, sleeping |Y9384 |
|congestion |789 |798|PROBLEM|Nasal congestion |R0981 |
+--------------------+-----+---+-------+------------------------------------------------------+----------+
###Markdown
Light Pipeline
###Code
from sparknlp_display import EntityResolverVisualizer
vis = EntityResolverVisualizer()
## To set custom label colors:
vis.set_label_colors({'TREATMENT':'#800080', 'PROBLEM':'#77b5fe'})
vis.display(light_result[0], 'ner_chunk', 'resolution', 'document')
###Output
_____no_output_____ |
Andrea_Christelle_LS_DS_131_Statistics_Probability_Assignment.ipynb | ###Markdown
*Data Science Unit 1 Sprint 3 Assignment 1* Apply the t-test to real dataYour assignment is to determine which issues have "statistically significant" differences between political parties in this [1980s congressional voting data](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records). The data consists of 435 instances (one for each congressperson), a class (democrat or republican), and 16 binary attributes (yes or no for voting for or against certain issues). Be aware - there are missing values!Your goals:1. Load and clean the data (or determine the best method to drop observations when running tests)2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.013. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.014. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference)Note that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis.Stretch goals:1. Refactor your code into functions so it's easy to rerun with arbitrary variables2. Apply hypothesis testing to your personal project data (for the purposes of this notebook you can type a summary of the hypothesis you formed and tested)
###Code
### YOUR CODE STARTS HERE
import scipy.stats
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
import pandas as pd
import numpy as np
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('house-votes-84.data', header=None)
print(df.shape)
df.head()
#Adding column headers specified in attribute information at: https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records
df.columns = ["party", "handicapped_infants", "water_project_cost_sharing", "adoption_of_the_budget_resolution","physician_fee_freeze","el_salvador_aid","religious_groups_in_schools","anti_satellite_test_ban","aid_to_nicaraguan_contras","mx_missile","immigration","synfuels_corporation_cutback","education_spending","superfund_right_to_sue","crime","duty_free_exports","export_administration_act_south_africa" ]
df.head()
df.shape
#Check missing values
df.isna().sum().sum()
#"?" counts as a value, yet provides no information.
###Output
_____no_output_____
###Markdown
Make voting records numeric
###Code
df = df.replace({'?':np.NaN, 'n':0, 'y':1})
print(df.shape)
df.head()
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Defining Republicans
###Code
rep = df[df.party == "republican"]
print(rep.shape)
rep.head()
###Output
(168, 17)
###Markdown
Defining Democrats
###Code
dem = df[df.party == "democrat"]
print(dem.shape)
dem.head()
df.party.value_counts()
###Output
_____no_output_____
###Markdown
1 Sample T-test
###Code
from scipy.stats import ttest_1samp
#used for 1 sample
dir(scipy.stats)
#used for more than 1 sample
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
###Output
_____no_output_____
###Markdown
T-test part 1: null hypothesis There is 0 support for this bill among Republicans in the House. Because there is only one sample, the null-hypothesis value must be specified. T-test part 2: alternative hypothesis There is non-0 support for this bill. (There is some support among Republicans in the House.)
###Code
print (rep["water_project_cost_sharing"].mean())
ttest_1samp(rep["water_project_cost_sharing"], 0, nan_policy="omit")
#Replace 0 with .5 to make the null hypothesis that Republicans are split evenly.
####would like to review https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_1samp.html
# Due to a t-statistic of 12.29, and a p-value of 2.53(**-24), reject
#the null hypothesis that there is 0 Republican support for the water_project
#cost sharing.
#T-test where the null hypothesis is Republican support is evenly divided.
# Alternative: Republican support is not even.
ttest_1samp(rep["water_project_cost_sharing"], .5, nan_policy="omit")
###Output
_____no_output_____
###Markdown
T-test part 3: confidence level: 95%. Reject the null hypothesis when the p-value < (1 - confidence level).
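A small illustrative helper (not part of the original assignment; `reject_null` is an ad-hoc name) that applies this decision rule:
###Code
# Illustrative sketch: reject H0 when the p-value is below alpha = 1 - confidence level
def reject_null(pvalue, confidence=0.95):
    alpha = 1 - confidence
    return pvalue < alpha

# e.g. reject_null(0.003) -> True, reject_null(0.2) -> False
###Output
_____no_output_____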
###Code
##SAVING FOR LATER
# style.use('fivethirtyeight')
###Output
_____no_output_____
###Markdown
2 Sample T-test T-test part 1: null hypothesis - the two parties' mean support is equal. Alternative: Democrats support the issue more than Republicans.
###Code
print ("Rep mean is", rep["water_project_cost_sharing"].mean())
print ("Dem mean is" ,dem["water_project_cost_sharing"].mean())
ttest_ind(rep["water_project_cost_sharing"], dem["water_project_cost_sharing"], nan_policy="omit")
#write a function for 2 Sample T-tests on votes
def t_test_votes(a, b):
    return ttest_ind(a, b, nan_policy="omit")
dem.describe()
def ttest(df1, df2, column_name):
a = df1[column_name]
b = df2[column_name]
statistic, pvalue = ttest_ind(a, b, nan_policy="omit")
print("The t-statistic is" , statistic, "The p-value is " , pvalue)
ttest (dem, rep, "handicapped_infants")
print(df.columns)
def ttest(df1, df2):
    results = {}
    for col in df1.columns.drop("party"):
        statistic, pvalue = ttest_ind(df1[col], df2[col], nan_policy="omit")
        results[col] = (statistic, pvalue)
    return results
#see code challenge 2 where you append to a dictionary
ttest(dem, rep)
print(df.columns)
#function to look at all of the bills
#df.columns
#def basic_statistics(bill):
#ttest_ind(rep["water_project_cost_sharing"], dem["water_project_cost_sharing"], nan_policy="omit")
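# (Illustrative extension, not part of the original submission.) With the per-issue
# results returned by ttest(dem, rep) above, the assignment questions can be answered
# by filtering on the sign of the t-statistic and the p-value, e.g.:
# results = ttest(dem, rep)
# dem_support = {bill: r for bill, r in results.items() if r[1] < 0.01 and r[0] > 0}
# rep_support = {bill: r for bill, r in results.items() if r[1] < 0.01 and r[0] < 0}
# no_difference = {bill: r for bill, r in results.items() if r[1] > 0.1}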
###Output
_____no_output_____ |
week05_nlp/Copy_of_seminar.ipynb | ###Markdown
Seminar part 1: Fun with Word EmbeddingsToday we're gonna play with word embeddings: train our own little embedding, load one from the gensim model zoo and use it to visualize text corpora.This whole thing is gonna happen on top of a dataset of Quora questions (`quora.txt`).__Requirements:__ `pip install --upgrade nltk gensim bokeh`, but only if you're running locally.
###Code
!pip install --upgrade nltk gensim bokeh
# download the data:
!wget https://www.dropbox.com/s/obaitrix9jyu84r/quora.txt?dl=1 -O ./quora.txt
# alternative download link: https://yadi.sk/i/BPQrUu1NaTduEw
import numpy as np
data = list(open("./quora.txt"))
data[50]
data[::1000]
###Output
_____no_output_____
###Markdown
__Tokenization:__ a typical first step for an nlp task is to split raw data into words.The text we're working with is in raw format: with all the punctuation and smiles attached to some words, so a simple str.split won't do.Let's use __`nltk`__ - a library that handles many nlp tasks like tokenization, stemming or part-of-speech tagging.
###Code
data[50]
from nltk.tokenize import WordPunctTokenizer
tokenizer = WordPunctTokenizer()
print(tokenizer.tokenize(data[50]))
# TASK: lowercase everything and extract tokens with tokenizer.
# data_tok should be a list of lists of tokens for each line in data.
data_tok = [
tokenizer.tokenize(line.lower()) for line in data
]
assert all(isinstance(row, (list, tuple)) for row in data_tok), "please convert each line into a list of tokens (strings)"
assert all(all(isinstance(tok, str) for tok in row) for row in data_tok), "please convert each line into a list of tokens (strings)"
is_latin = lambda tok: all('a' <= x.lower() <= 'z' for x in tok)
assert all(map(lambda l: not is_latin(l) or l.islower(), map(' '.join, data_tok))), "please make sure to lowercase the data"
print([' '.join(row) for row in data_tok[:2]])
###Output
["can i get back with my ex even though she is pregnant with another guy ' s baby ?", 'what are some ways to overcome a fast food addiction ?']
###Markdown
__Word vectors:__ as the saying goes, there's more than one way to train word embeddings. There's Word2Vec and GloVe with different objective functions. Then there's fasttext that uses character-level models to train word embeddings. The choice is huge, so let's start someplace small: __gensim__ is another nlp library that features many vector-based models including word2vec.
###Code
from gensim.models import Word2Vec
model = Word2Vec(data_tok,
size=32, # embedding vector size
                 min_count=5, # consider words that occurred at least 5 times
window=5).wv # define context as a 5-word window around the target word
# now you can get word vectors !
model.get_vector('anything')
(model.get_vector('bread') * model.get_vector('break')).sum() / (
np.linalg.norm(model.get_vector('bread'))
* np.linalg.norm(model.get_vector('break'))
)
# or query similar words directly. Go play with it!
model.most_similar('dumb')
###Output
_____no_output_____
###Markdown
Using pre-trained modelTook it a while, huh? Now imagine training life-sized (100~300D) word embeddings on gigabytes of text: wikipedia articles or twitter posts. Thankfully, nowadays you can get a pre-trained word embedding model in 2 lines of code (no sms required, promise).
###Code
import gensim.downloader as api
api.info()['models'].keys()
import gensim.downloader as api
model = api.load('glove-twitter-100')
model.most_similar(positive=["spock", "starwars"], negative=["startrek"])
model.most_similar(positive=["coder", "money"], negative=["brain"])
###Output
_____no_output_____
###Markdown
Visualizing word vectorsOne way to see if our vectors are any good is to plot them. Thing is, those vectors are in 30D+ space and we humans are more used to 2-3D.Luckily, we machine learners know about __dimensionality reduction__ methods.Let's use that to plot 1000 most frequent words
###Code
words = sorted(model.vocab.keys(),
key=lambda word: model.vocab[word].count,
reverse=True)[:1000]
print(words[::100])
# for each word, compute it's vector with model
word_vectors = np.array(
[model.get_vector(w) for w in words]
)
word_vectors.shape
assert isinstance(word_vectors, np.ndarray)
assert word_vectors.shape == (len(words), 100)
assert np.isfinite(word_vectors).all()
###Output
_____no_output_____
###Markdown
Linear projection: PCAThe simplest linear dimensionality reduction method is __P__rincipal __C__omponent __A__nalysis.In geometric terms, PCA tries to find axes along which most of the variance occurs. The "natural" axes, if you wish.Under the hood, it attempts to decompose object-feature matrix $X$ into two smaller matrices: $W$ and $\hat W$ minimizing _mean squared error_:$$\|(X W) \hat{W} - X\|^2_2 \to_{W, \hat{W}} \min$$- $X \in \mathbb{R}^{n \times m}$ - object matrix (**centered**);- $W \in \mathbb{R}^{m \times d}$ - matrix of direct transformation;- $\hat{W} \in \mathbb{R}^{d \times m}$ - matrix of reverse transformation;- $n$ samples, $m$ original dimensions and $d$ target dimensions;
###Code
from sklearn.decomposition import PCA
# map word vectors onto 2d plane with PCA. Use good old sklearn api (fit, transform)
# after that, normalize vectors to make sure they have zero mean and unit variance
word_vectors_pca = PCA(2).fit_transform(word_vectors)
# and maybe MORE OF YOUR CODE here :)
assert word_vectors_pca.shape == (len(word_vectors), 2), "there must be a 2d vector for each word"
assert max(abs(word_vectors_pca.mean(0))) < 1e-5, "points must be zero-centered"
# assert max(abs(1.0 - word_vectors_pca.std(0))) < 1e-2, "points must have unit variance"
###Output
_____no_output_____
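###Markdown
As a sanity check of the decomposition above, the same 2D projection can be reproduced with a plain SVD of the centered matrix. This is an illustrative sketch (not part of the original assignment); `X_centered`, `W` and `scores_svd` are ad-hoc names.
###Code
# The top-2 right singular vectors of the centered data give the direct transformation W;
# X_centered @ W matches PCA's output up to per-component sign flips.
X_centered = word_vectors - word_vectors.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
W = Vt[:2].T
scores_svd = X_centered @ W
print("max |difference| vs sklearn PCA:",
      np.abs(np.abs(scores_svd) - np.abs(word_vectors_pca)).max())
###Output
_____no_output_____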
###Markdown
Let's draw it!
###Code
import bokeh.models as bm, bokeh.plotting as pl
from bokeh.io import output_notebook
output_notebook()
def draw_vectors(x, y, radius=10, alpha=0.25, color='blue',
                 width=600, height=400, show=True, **kwargs):
    """ draws an interactive plot for data points with auxiliary info on hover """
if isinstance(color, str): color = [color] * len(x)
data_source = bm.ColumnDataSource({ 'x' : x, 'y' : y, 'color': color, **kwargs })
fig = pl.figure(active_scroll='wheel_zoom', width=width, height=height)
fig.scatter('x', 'y', size=radius, color='color', alpha=alpha, source=data_source)
fig.add_tools(bm.HoverTool(tooltips=[(key, "@" + key) for key in kwargs.keys()]))
if show: pl.show(fig)
return fig
draw_vectors(word_vectors_pca[:, 0], word_vectors_pca[:, 1], token=words)
# hover a mouse over there and see if you can identify the clusters
###Output
_____no_output_____
###Markdown
Visualizing neighbors with t-SNEPCA is nice but it's strictly linear and thus only able to capture coarse high-level structure of the data.If we instead want to focus on keeping neighboring points near, we could use TSNE, which is itself an embedding method. Here you can read __[more on TSNE](https://distill.pub/2016/misread-tsne/)__.
###Code
from sklearn.manifold import TSNE
# map word vectors onto 2d plane with TSNE. hint: use verbose=100 to see what it's doing.
# normalize them just like with pca
word_tsne = TSNE(2).fit_transform(word_vectors)
draw_vectors(word_tsne[:, 0], word_tsne[:, 1], color='green', token=words)
###Output
_____no_output_____
###Markdown
Visualizing phrasesWord embeddings can also be used to represent short phrases. The simplest way is to take __an average__ of vectors for all tokens in the phrase with some weights.This trick is useful to identify what data you are working with: find if there are any outliers, clusters or other artefacts.Let's try this new hammer on our data!
###Code
def get_phrase_embedding(phrase):
    """
    Convert phrase to a vector by aggregating its word embeddings. See description above.
    """
    # 1. lowercase phrase
    # 2. tokenize phrase
    # 3. average word vectors for all words in tokenized phrase
    #    (skip words that are not in model's vocabulary;
    #     if all words are missing from vocabulary, return zeros)
    vector = np.zeros([model.vector_size], dtype='float32')
    tokens = [tok for tok in tokenizer.tokenize(phrase.lower()) if tok in model.vocab]
    if tokens:
        vector = np.mean([model.get_vector(tok) for tok in tokens], axis=0)
    return vector
vector = get_phrase_embedding("I'm very sure. This never happened to me before...")
assert np.allclose(vector[::10],
np.array([ 0.31807372, -0.02558171, 0.0933293 , -0.1002182 , -1.0278689 ,
-0.16621883, 0.05083408, 0.17989802, 1.3701859 , 0.08655966],
dtype=np.float32))
# let's only consider ~5k phrases for a first run.
chosen_phrases = data[::len(data) // 1000]
# compute vectors for chosen phrases
phrase_vectors = np.array([get_phrase_embedding(phrase) for phrase in chosen_phrases])
assert isinstance(phrase_vectors, np.ndarray) and np.isfinite(phrase_vectors).all()
assert phrase_vectors.shape == (len(chosen_phrases), model.vector_size)
# map vectors into 2d space with pca, tsne or your other method of choice
# don't forget to normalize
phrase_vectors_2d = TSNE(verbose=1000).fit_transform(phrase_vectors)
phrase_vectors_2d = (phrase_vectors_2d - phrase_vectors_2d.mean(axis=0)) / phrase_vectors_2d.std(axis=0)
draw_vectors(phrase_vectors_2d[:, 0], phrase_vectors_2d[:, 1],
phrase=[phrase[:50] for phrase in chosen_phrases],
radius=20,)
###Output
_____no_output_____
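###Markdown
The "with some weights" remark above can be taken further. Below is an illustrative sketch (not the reference solution) that down-weights very frequent words using the corpus counts stored in `model.vocab`; `get_weighted_phrase_embedding` is an ad-hoc name.
###Code
def get_weighted_phrase_embedding(phrase):
    """ Like get_phrase_embedding, but down-weights frequent words (illustrative only). """
    tokens = [tok for tok in tokenizer.tokenize(phrase.lower()) if tok in model.vocab]
    if not tokens:
        return np.zeros([model.vector_size], dtype='float32')
    # weight each token inversely to the log of its corpus frequency
    weights = np.array([1.0 / np.log(2.0 + model.vocab[tok].count) for tok in tokens])
    vectors = np.array([model.get_vector(tok) for tok in tokens])
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()

get_weighted_phrase_embedding("I'm very sure. This never happened to me before...")[::10]
###Output
_____no_output_____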
###Markdown
Finally, let's build a simple "similar question" engine with phrase embeddings we've built.
###Code
# compute vector embedding for all lines in data
data_vectors = np.array([get_phrase_embedding(l) for l in data])
def find_nearest(query, k=10):
    """
    given text line (query), return k most similar lines from data, sorted from most to least similar
    similarity should be measured as cosine between query and line embedding vectors
    hint: it's okay to use global variables: data and data_vectors. see also: np.argpartition, np.argsort
    """
    query_vector = get_phrase_embedding(query)
    # cosine similarity between the query and every line (epsilon avoids division by zero for all-OOV lines)
    similarities = data_vectors @ query_vector / (
        np.linalg.norm(data_vectors, axis=1) * np.linalg.norm(query_vector) + 1e-9
    )
    top_k_indices = np.argsort(-similarities)[:k]
    return [data[i] for i in top_k_indices]
results = find_nearest(query="How do i enter the matrix?", k=10)
print(''.join(results))
assert len(results) == 10 and isinstance(results[0], str)
assert results[0] == 'How do I get to the dark web?\n'
assert results[3] == 'What can I do to save the world?\n'
find_nearest(query="How does Trump?", k=10)
find_nearest(query="Why don't i ask a question myself?", k=10)
###Output
_____no_output_____ |
Codes/DataAnnotate.ipynb | ###Markdown
1. Install and Import Packages
###Code
!pip install pydub
! pip install ffmpeg-python
import cv2
import numpy as np
import os
import pandas as pd
import platform
import sqlalchemy
#import mysql.connector
import requests
import json
from pandas.io.json import json_normalize
import time
import librosa
import matplotlib.pyplot as plt
import IPython.display as ipd
import librosa
import librosa.display
import warnings
warnings.filterwarnings("ignore")
from pydub import AudioSegment
from moviepy.editor import *
import subprocess
import os
import sys
#import Pillow
import ffmpeg
import sys
import json
from pprint import pprint # for printing Python dictionaries in a human-readable way
###Output
Imageio: 'ffmpeg-linux64-v3.3.1' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg-linux64-v3.3.1 (43.8 MB)
Downloading: 8192/45929032 bytes (0.0%)2662400/45929032 bytes (5.8%)5431296/45929032 bytes (11.8%)8626176/45929032 bytes (18.8%)11370496/45929032 bytes (24.8%)14622720/45929032 bytes (31.8%)17645568/45929032 bytes (38.4%)20668416/45929032 bytes (45.0%)23912448/45929032 bytes (52.1%)26804224/45929032 bytes (58.4%)30015488/45929032 bytes (65.4%)32874496/45929032 bytes (71.6%)35651584/45929032 bytes (77.6%)38494208/45929032 bytes (83.8%)41082880/45929032 bytes (89.4%)44294144/45929032 bytes (96.4%)45929032/45929032 bytes (100.0%)
Done
File saved as /root/.imageio/ffmpeg/ffmpeg-linux64-v3.3.1.
###Markdown
2. CONNECT TO COLAB FOLDERS
###Code
pd.set_option('display.max_colwidth', -1)
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
directory = "/content/drive/My Drive/thesis_work/audio/"
weather_csv = "/content/drive/My Drive/thesis_work/weather_csv/"
trim_directory = "/content/drive/My Drive/thesis_work/trim/"
orig_image_dir = "/content/drive/My Drive/thesis_work/orig/"
os.chdir(directory)
from datetime import datetime
now = datetime.now()
today_dt=now.strftime("%Y%m%d%H")
###Output
_____no_output_____
###Markdown
3. LOCATE FOLDER WITH VIDEO
###Code
Vid_fldr1 = '/content/drive/My Drive/thesis_work/FlBot_Video_112721_021822'
Vid_fldr2 = '/content/drive/My Drive/thesis_work/audio/PhD_Thesis/data/video'
Vid_fldr3 = '/content/drive/My Drive/thesis_work/audio/PhD_Thesis/data/amzn_video'
Vid_fldr4 = '/content/drive/My Drive/thesis_work/audio/PhD_Thesis/data/video_2020_21'
###Output
_____no_output_____
###Markdown
3.1 First Group of Folder
###Code
import os
from glob import glob
# Iterate over the list of filepaths & remove each file.
mpg_files = []
mp4 = glob(Vid_fldr1+ '/*.mp4')
for j in mp4:
try:
#os.remove(j)
mpg_files.append(j)
except OSError:
print("Error while adding file")
df_vid_list1 = pd.DataFrame(mpg_files,columns = ['Orig_vid_loc'] )
df_vid_list1.head(2)
Final_df1 = pd.DataFrame()
#for row in df_vid_list2.head(5).itertuples():
for row in df_vid_list1.itertuples():
vid= row.Orig_vid_loc
#print(vid)
vid_Json1 = (ffmpeg.probe(vid)["format"])
vid_Json2 = (ffmpeg.probe(vid)["streams"])
with open('data.json', 'w') as f:
json.dump(vid_Json1, f)
with open('data1.json', 'w') as f:
json.dump(vid_Json2, f)
df = pd.read_json('data.json')
df1 = pd.read_json('data1.json')
df_video = df1.loc[df1['codec_type'] == 'video']
df_audio = df1.loc[df1['codec_type'] == 'audio']
df= df[['filename','tags','duration','size']]
df_video = df_video[['width','height','nb_frames','duration_ts','duration','avg_frame_rate']]
df_audio = df_audio[['sample_rate','channels','nb_frames','channel_layout','max_bit_rate']]
#df1= df1[['nb_frames','width','height']]
#df= df[['filename','tags','duration','size']]
time= df.iloc[1,1]
fname= df.iloc[0,0]
durn = df_video.iloc[0,4]
vid_frm_rt=df_video.iloc[0,5]
#vid_frm_ttl=df_video.iloc[0,3]
size = df.iloc[0,3]
wdth= df_video.iloc[0,1]
hgth = df_video.iloc[0,0]
vid_frames = df_video.iloc[0,2]
aud_smpl_rt = df_audio.iloc[0,0]
aud_frames = df_audio.iloc[0,2]
aud_chnl = df_audio.iloc[0,3]
aud_max_bit_rate = df_audio.iloc[0,4]
#print(wdth)
#print(hgth)
#video_meta_df2 =video_meta_df1
video_meta_df2 = pd.DataFrame([[fname,time,durn,size,wdth,hgth,vid_frames,vid_frm_rt,aud_smpl_rt,aud_frames,aud_chnl,aud_max_bit_rate]]
,columns=['FileName','creation_time','Vid_lngth_sec','Vid_size_KB','Frm_Width','Frm_Height','Tot_Img_Frames',
'Vid_frm_rate','Aud_sample_rate','Tot_aud_Frames','Aud_Chanel','Aud_max_bit_rate'])
Final_df1 = pd.concat([Final_df1,video_meta_df2], ignore_index=True)
#Final_df2 =Final_df2[['FileName','creation_time']]
#Final_df2 =Final_df2[['FileName','creation_time']]
Final_df1=Final_df1.drop_duplicates()
Final_df1.to_csv('Final_df1a_'+today_dt+'.csv',index=False)
Final_df1.sample(2)
#df1 = pd.read_csv('Final_df2022030802.csv')
df1.head(2)
###Output
_____no_output_____
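###Markdown
The same ffprobe-and-flatten steps are repeated for every folder below. A possible refactor (illustrative sketch only; `probe_video` is an ad-hoc name) collects one row per file, so each folder section reduces to a single loop over its file list.
###Code
# Illustrative helper: extract the same metadata fields used above for a single video file
def probe_video(vid):
    fmt = ffmpeg.probe(vid)["format"]
    streams = ffmpeg.probe(vid)["streams"]
    video = next(s for s in streams if s["codec_type"] == "video")
    audio = next(s for s in streams if s["codec_type"] == "audio")
    return {"FileName": fmt.get("filename"),
            "creation_time": fmt.get("tags", {}).get("creation_time"),
            "Vid_lngth_sec": video.get("duration"),
            "Vid_size_KB": fmt.get("size"),
            "Frm_Width": video.get("width"),
            "Frm_Height": video.get("height"),
            "Tot_Img_Frames": video.get("nb_frames"),
            "Vid_frm_rate": video.get("avg_frame_rate"),
            "Aud_sample_rate": audio.get("sample_rate"),
            "Tot_aud_Frames": audio.get("nb_frames"),
            "Aud_Chanel": audio.get("channel_layout"),
            "Aud_max_bit_rate": audio.get("max_bit_rate")}

# usage sketch: pd.DataFrame(probe_video(v) for v in df_vid_list1['Orig_vid_loc'])
###Output
_____no_output_____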
###Markdown
3.2 Second Group of Folder
###Code
import os
from glob import glob
# Iterate over the list of filepaths & remove each file.
mpg_files2 = []
mp4 = glob(Vid_fldr2+ '/*.mp4')
for j in mp4:
try:
#os.remove(j)
mpg_files2.append(j)
except OSError:
print("Error while adding file")
df_vid_list2 = pd.DataFrame(mpg_files2,columns = ['Orig_vid_loc'] )
#print(df_vid_list2.shape)
Final_df2 = pd.DataFrame()
#for row in df_vid_list2.head(5).itertuples():
for row in df_vid_list2.itertuples():
vid= row.Orig_vid_loc
#print(vid)
vid_Json1 = (ffmpeg.probe(vid)["format"])
vid_Json2 = (ffmpeg.probe(vid)["streams"])
with open('data.json', 'w') as f:
json.dump(vid_Json1, f)
with open('data1.json', 'w') as f:
json.dump(vid_Json2, f)
df = pd.read_json('data.json')
df1 = pd.read_json('data1.json')
df_video = df1.loc[df1['codec_type'] == 'video']
df_audio = df1.loc[df1['codec_type'] == 'audio']
df= df[['filename','tags','duration','size']]
df_video = df_video[['width','height','nb_frames','duration_ts','duration','avg_frame_rate']]
df_audio = df_audio[['sample_rate','channels','nb_frames','channel_layout','max_bit_rate']]
#df1= df1[['nb_frames','width','height']]
#df= df[['filename','tags','duration','size']]
time= df.iloc[1,1]
fname= df.iloc[0,0]
durn = df_video.iloc[0,4]
vid_frm_rt=df_video.iloc[0,5]
#vid_frm_ttl=df_video.iloc[0,3]
size = df.iloc[0,3]
wdth= df_video.iloc[0,1]
hgth = df_video.iloc[0,0]
vid_frames = df_video.iloc[0,2]
aud_smpl_rt = df_audio.iloc[0,0]
aud_frames = df_audio.iloc[0,2]
aud_chnl = df_audio.iloc[0,3]
aud_max_bit_rate = df_audio.iloc[0,4]
#print(wdth)
#print(hgth)
#video_meta_df2 =video_meta_df1
video_meta_df2 = pd.DataFrame([[fname,time,durn,size,wdth,hgth,vid_frames,vid_frm_rt,aud_smpl_rt,aud_frames,aud_chnl,aud_max_bit_rate]]
,columns=['FileName','creation_time','Vid_lngth_sec','Vid_size_KB','Frm_Width','Frm_Height','Tot_Img_Frames',
'Vid_frm_rate','Aud_sample_rate','Tot_aud_Frames','Aud_Chanel','Aud_max_bit_rate'])
Final_df2 = pd.concat([Final_df2,video_meta_df2], ignore_index=True)
#Final_df2 =Final_df2[['FileName','creation_time']]
#Final_df2 =Final_df2[['FileName','creation_time']]
Final_df2=Final_df2.drop_duplicates()
Final_df2.to_csv('Final_df2a_'+today_dt+'.csv',index=False)
Final_df2.sample(2)
#df1= pd.read_csv('Final_df22 02 2030811.csv')
#df2= pd.read_csv('Final_df22022030811.csv')
#df2.head(2)
###Output
_____no_output_____
###Markdown
3.3 Third Group of Folder
###Code
import os
from glob import glob
# Iterate over the list of filepaths & remove each file.
mpg_files3 = []
mp4 = glob(Vid_fldr3+ '/*.mp4')
for j in mp4:
try:
#os.remove(j)
mpg_files3.append(j)
except OSError:
print("Error while adding file")
df_vid_list3 = pd.DataFrame(mpg_files3,columns = ['Orig_vid_loc'] )
print(df_vid_list3.shape)
###Output
(193, 1)
###Markdown
###Code
Final_df3 = pd.DataFrame()
#for row in df_vid_list2.head(5).itertuples():
for row in df_vid_list3.itertuples():
vid= row.Orig_vid_loc
#print(vid)
vid_Json1 = (ffmpeg.probe(vid)["format"])
vid_Json2 = (ffmpeg.probe(vid)["streams"])
with open('data.json', 'w') as f:
json.dump(vid_Json1, f)
with open('data1.json', 'w') as f:
json.dump(vid_Json2, f)
df = pd.read_json('data.json')
df1 = pd.read_json('data1.json')
df_video = df1.loc[df1['codec_type'] == 'video']
df_audio = df1.loc[df1['codec_type'] == 'audio']
df= df[['filename','tags','duration','size']]
df_video = df_video[['width','height','nb_frames','duration_ts','duration','avg_frame_rate']]
df_audio = df_audio[['sample_rate','channels','nb_frames','channel_layout','max_bit_rate']]
#df1= df1[['nb_frames','width','height']]
#df= df[['filename','tags','duration','size']]
time= df.iloc[1,1]
fname= df.iloc[0,0]
durn = df_video.iloc[0,4]
vid_frm_rt=df_video.iloc[0,5]
#vid_frm_ttl=df_video.iloc[0,3]
size = df.iloc[0,3]
wdth= df_video.iloc[0,1]
hgth = df_video.iloc[0,0]
vid_frames = df_video.iloc[0,2]
aud_smpl_rt = df_audio.iloc[0,0]
aud_frames = df_audio.iloc[0,2]
aud_chnl = df_audio.iloc[0,3]
aud_max_bit_rate = df_audio.iloc[0,4]
#print(wdth)
#print(hgth)
#video_meta_df2 =video_meta_df1
video_meta_df2 = pd.DataFrame([[fname,time,durn,size,wdth,hgth,vid_frames,vid_frm_rt,aud_smpl_rt,aud_frames,aud_chnl,aud_max_bit_rate]]
,columns=['FileName','creation_time','Vid_lngth_sec','Vid_size_KB','Frm_Width','Frm_Height','Tot_Img_Frames',
'Vid_frm_rate','Aud_sample_rate','Tot_aud_Frames','Aud_Chanel','Aud_max_bit_rate'])
Final_df3 = pd.concat([Final_df3,video_meta_df2], ignore_index=True)
#Final_df2 =Final_df2[['FileName','creation_time']]
#Final_df2 =Final_df2[['FileName','creation_time']]
Final_df3=Final_df3.drop_duplicates()
Final_df3.to_csv('Final_df3a_'+today_dt+'.csv',index=False)
Final_df3.sample(2)
#print(Final_df3.shape)
#df3 = pd.read_csv('Final_df32022030811.csv')
#df3.head(2)
###Output
_____no_output_____
###Markdown
3.4 Fourth Group of Data
###Code
import os
from glob import glob
# Iterate over the list of filepaths & remove each file.
mpg4_files = []
mp4 = glob(Vid_fldr4+ '/*.mp4')
for j in mp4:
try:
#os.remove(j)
mpg4_files.append(j)
except OSError:
print("Error while adding file")
df_vid_list4 = pd.DataFrame(mpg4_files,columns = ['Orig_vid_loc'] )
print(df_vid_list4.shape)
Final_df4 = pd.DataFrame()
#for row in df_vid_list2.head(5).itertuples():
for row in df_vid_list4.itertuples():
vid= row.Orig_vid_loc
#print(vid)
vid_Json1 = (ffmpeg.probe(vid)["format"])
vid_Json2 = (ffmpeg.probe(vid)["streams"])
with open('data.json', 'w') as f:
json.dump(vid_Json1, f)
with open('data1.json', 'w') as f:
json.dump(vid_Json2, f)
df = pd.read_json('data.json')
df1 = pd.read_json('data1.json')
df_video = df1.loc[df1['codec_type'] == 'video']
df_audio = df1.loc[df1['codec_type'] == 'audio']
df= df[['filename','tags','duration','size']]
df_video = df_video[['width','height','nb_frames','duration_ts','duration','avg_frame_rate']]
df_audio = df_audio[['sample_rate','channels','nb_frames','channel_layout','max_bit_rate']]
#df1= df1[['nb_frames','width','height']]
#df= df[['filename','tags','duration','size']]
time= df.iloc[1,1]
fname= df.iloc[0,0]
durn = df_video.iloc[0,4]
vid_frm_rt=df_video.iloc[0,5]
#vid_frm_ttl=df_video.iloc[0,3]
size = df.iloc[0,3]
wdth= df_video.iloc[0,1]
hgth = df_video.iloc[0,0]
vid_frames = df_video.iloc[0,2]
aud_smpl_rt = df_audio.iloc[0,0]
aud_frames = df_audio.iloc[0,2]
aud_chnl = df_audio.iloc[0,3]
aud_max_bit_rate = df_audio.iloc[0,4]
#print(wdth)
#print(hgth)
#video_meta_df2 =video_meta_df1
video_meta_df2 = pd.DataFrame([[fname,time,durn,size,wdth,hgth,vid_frames,vid_frm_rt,aud_smpl_rt,aud_frames,aud_chnl,aud_max_bit_rate]]
,columns=['FileName','creation_time','Vid_lngth_sec','Vid_size_KB','Frm_Width','Frm_Height','Tot_Img_Frames',
'Vid_frm_rate','Aud_sample_rate','Tot_aud_Frames','Aud_Chanel','Aud_max_bit_rate'])
Final_df4 = pd.concat([Final_df4,video_meta_df2], ignore_index=True)
#Final_df2 =Final_df2[['FileName','creation_time']]
#Final_df2 =Final_df2[['FileName','creation_time']]
Final_df4=Final_df4.drop_duplicates()
Final_df4.to_csv('Final_df4a_'+today_dt+'.csv',index=False)
Final_df4.sample(2)
#df4 = pd.read_csv('Final_df4_2022030910.csv')
#df4.head(3)
###Output
_____no_output_____
###Markdown
4. Annotate all Videos with 5 Minute TimeStamps
###Code
Final_df = pd.concat([Final_df1,Final_df2,Final_df3,Final_df4], ignore_index=True)
Final_df.shape
#Final_df =All_video_df[['FileName','creation_time']]
#Final_df = All_video_df
#Final_df1 =Final_df1[['FileName','creation_time']]
Final_df=Final_df.drop_duplicates()
Final_df.to_csv('Final_dfa_'+today_dt+'.csv',index=False)
#Final_df=Final_df.drop_duplicates()
#Final_df.head(2)
vid_strftime = "%Y-%m-%dT%H:%M"
vid_strftime = "%Y-%m-%dT%H:%M"
Final_df['Date']=Final_df['creation_time'].str[:-11]
Final_df['Date'] = pd.to_datetime(Final_df['Date'],format=vid_strftime)
Final_df['Date']=Final_df['Date'].dt.tz_localize('utc').dt.tz_convert('US/Eastern')
Final_df['Date'] = pd.to_datetime(Final_df['Date'].dt.tz_localize(None))
#Final_df['Date'] = Final_df['Date'].dt.tz_localize(None)
#Final_df.tail(2)
#sample_mp4 = '/content/drive/My Drive/thesis_work/FlBot_Video_112721_021822/Rec_20220218_211712_151_M.mp4'
#from IPython.display import HTML
#from base64 import b64encode
#mp4 = open(sample_mp4,'rb').read()
#data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
#HTML("""
#<video width=600 controls>
# <source src="%s" type="video/mp4">
#</video>
#""" % data_url)
Final_df['fivemin_ts'] =Final_df['Date'].dt.round('5min')
Final_df['fivemin_ts']=Final_df['fivemin_ts'].apply(lambda x: str(x))
Final_df['fivemin_ts'] =Final_df['fivemin_ts'].str[:16]
#Final_df.head()
df_fv_min = Final_df.groupby('fivemin_ts').apply(lambda x: x['FileName'].unique())
df_fv_min = df_fv_min.apply(pd.Series)
df_fv_min['fivemin_ts'] = df_fv_min.index
df_fv_min.reset_index(drop=True, inplace=True)
df_fv_min["time_id"] = df_fv_min.index + 1
df_vid_list = df_fv_min[['time_id','fivemin_ts',0]]
df_all_fv_min = df_fv_min[['time_id','fivemin_ts',0, 1, 2, 3, 4, 5]]
df_all_fv_min.to_csv('all_vid_fv_min_'+today_dt+'.csv',index=False)
df_all_fv_min.to_csv('all_vid_fv_min_'+today_dt+'.csv',index=False)
#df_all_fv_min.head(2)
df_vid_list = df_vid_list.rename(columns={0: "Fv_min_src_Video"})
#Final_df.head(2)
df_final_vid_lst = pd.merge(Final_df, df_vid_list, how='inner', left_on = 'FileName', right_on = 'Fv_min_src_Video')
#df_final_vid_lst.head()
df_final_vid_lst = df_final_vid_lst[['time_id', 'FileName', 'creation_time', 'Date', 'fivemin_ts_x' ]]
df_final_vid_lst = df_final_vid_lst.rename(columns={'fivemin_ts_x': "Fivemin_Time",'Date':'Orig_Video_Date','creation_time':'UTC_creation_time'})
df_final_vid_lst.to_csv('final_vid_lst_'+today_dt+'.csv',index=False)
#df_final_vid_lst.head(2)
#df_final_vid_lst.sort_values(by=['time_id']).head(2)
#df_final_vid_lst.sort_values(by=['time_id']).tail(5)
df_final_vid_lst.to_csv('df_final_vid_lst_' + today_dt+'.csv',index=False)
#sample_mp4='/content/drive/My Drive/thesis_work/audio/PhD_Thesis/data/video/Rec_20200328_174707_151_M.mp4'
#from IPython.display import HTML
#from base64 import b64encode
#mp4 = open(sample_mp4,'rb').read()
#data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
#HTML("""
#<video width=600 controls>
# <source src="%s" type="video/mp4">
#</video>
#""" % data_url)
from datetime import date,datetime,timedelta
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import numpy as np
import pandas as pd
# Handle date time conversions between pandas and matplotlib
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
#df_final_vid_lst.head(2)
df_final_vid_lst['Time_Index'] = df_final_vid_lst['Orig_Video_Date']
df_final_vid_lst['Day']=df_final_vid_lst['Orig_Video_Date'].apply(lambda x: str(x))
df_final_vid_lst['Day'] =df_final_vid_lst['Day'].str[:10]
df_final_vid_lst['Month']=df_final_vid_lst['Orig_Video_Date'].apply(lambda x: str(x))
df_final_vid_lst['Month'] =df_final_vid_lst['Day'].str[:7]
df_mnthly_video = df_final_vid_lst.groupby('Month').size()
#type(df_mnthly_video)
#df_monthly_video = pd.DataFrame(df_mnthly_video)
#df_monthly_video
#df_final_vid_lst['Month'] = df_monthly.index
#df_final_vid_lst['Month'] = pd.to_datetime(df_final_vid_lst['Orig_Video_Date'], format='%y%m%d')
#df_monthly.reset_index(level=0, inplace=True)
#df_final_vid_lst['Month'] = pd.DatetimeIndex(df_final_vid_lst['Orig_Video_Date']).month
#df_monthly_video = df_final_vid_lst.groupby('Time_Index', as_index = False)['Month'].count()
#df_monthly_video
#df_daily_video.head(2)
#df_final_vid_lst['Month'] = df_daily_video.Orig_Video_Date.apply(lambda x: x.strftime('%Y%m%d')).astype(int)
#df_final_vid_lst.set_index('Time_Index', inplace=True)
#df_final_vid_lst.head()
#df_video_monthly = df_final_vid_lst.resample('M').sum()
#df_video_monthly.head(2)
###Output
_____no_output_____
###Markdown
5. Weather Data
###Code
import glob
path = r'/content/drive/My Drive/thesis_work/audio/PhD_Thesis/data/weather_data'
all_files = glob.glob(path + "/*.txt")
df_files = (pd.read_csv(f) for f in all_files)
df_weather = pd.concat(df_files, ignore_index=True)
df_weather.head(2)
###Output
_____no_output_____ |
teachopencadd/talktorials/T001_query_chembl/talktorial.ipynb | ###Markdown
T001 · Compound data acquisition (ChEMBL)Authors:- Svetlana Leng, CADD seminar 2017, Volkamer lab, Charité/FU Berlin - Paula Junge, CADD seminar 2018, Volkamer lab, Charité/FU Berlin- Dominique Sydow, 2019-2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Andrea Volkamer, 2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Yonghui Chen, 2020, [Volkamer lab, Charité](https://volkamerlab.org/) __Talktorial T001__: This talktorial is part of the TeachOpenCADD pipeline described in the [first TeachOpenCADD paper](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x), comprising of talktorials T001-T010. Aim of this talktorialIn this notebook, we will learn more about the ChEMBL database and how to extract data from ChEMBL, i.e. (compound, activity data) pairs for a target of interest. These data sets can be used for many cheminformatics tasks, such as similarity search, clustering or machine learning.Our work here will include finding compounds which were tested against a certain target and filtering available bioactivity data. Contents in *Theory** ChEMBL database * ChEMBL web services * ChEMBL webresource client* Compound activity measures * IC50 and Ki measure * pIC50 and pKi value Contents in *Practical* **Goal: Get a list of compounds with bioactivity data for a given target*** Connect to ChEMBL database* Get target data (example: EGFR kinase) * Fetch and download target data * Select target ChEMBL ID* Get bioactivity data * Fetch and download bioactivity data for target * Preprocess and filter bioactivity data* Get compound data * Fetch and download compound data * Preprocess and filter compound data* Output bioactivity-compound data * Merge bioactivity and compound data, and add pKi values * Draw molecules with highest pKi * Write output file References* ChEMBL bioactivity database: [Gaulton *et al.*, Nucleic Acids Res. (2017), 45(Database issue), D945–D954](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5210557/)* ChEMBL web services: [Davies *et al.*, Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881) * [ChEMBL web-interface](https://www.ebi.ac.uk/chembl/)* GitHub [ChEMBL webrescource client](https://github.com/chembl/chembl_webresource_client)* The EBI RDF platform: [Jupp *et al.*, Bioinformatics (2014), 30(9), 1338-9](https://www.ncbi.nlm.nih.gov/pubmed/24413672)* Info on half maximal inhibitory concentration: [(p)IC50](https://en.wikipedia.org/wiki/IC50)* [UniProt website](https://www.uniprot.org/) Theory ChEMBL database>"ChEMBL is a manually curated database of bioactive molecules with drug-like properties. It brings together chemical, bioactivity and genomic data to aid the translation of genomic information into effective new drugs." 
([ChEMBL website](https://www.ebi.ac.uk/chembl/))* Open large-scale bioactivity database* **Current data content (as of 09.2020, ChEMBL 27):** * \>1.9 million distinct compounds * \>16 million activity values * Assays are mapped to ~13,000 targets* **Data sources** include scientific literature, PubChem bioassays, Drugs for Neglected Diseases Initiative (DNDi), BindingDB database, ...* ChEMBL data can be accessed via a [web-interface](https://www.ebi.ac.uk/chembl/), the [EBI-RDF platform](https://www.ncbi.nlm.nih.gov/pubmed/24413672) and the [ChEMBL webrescource client](https://github.com/chembl/chembl_webresource_client) ChEMBL web services* RESTful web service* ChEMBL web service version 2.x resource schema: *Figure 1:* "[ChEMBL web service schema diagram](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/figure/F2/). The oval shapes represent ChEMBL web service resources and the line between two resources indicates that they share a common attribute. The arrow direction shows where the primary information about a resource type can be found. A dashed line indicates the relationship between two resources behaves differently. For example, the `Image` resource provides a graphical based representation of a `Molecule`."Figure and description taken from: [Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881). ChEMBL webresource client* Python client library for accessing ChEMBL data* Handles interaction with the HTTPS protocol* Lazy evaluation of results -> reduced number of network requests Compound activity measures Activity measures: IC50 and Ki* [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50)* Indicates how much of a particular drug or other substance is needed to inhibit a given biological process by half*Figure 2:* Visual demonstration of how to derive an IC50 value: (i) Arrange inhibition data on y-axis and log(concentration) on x-axis. (ii) Identify maximum and minimum inhibition. (iii) The IC50 is the concentration at which the curve passes through the 50% inhibition level. Figure ["Example IC50 curve demonstrating visually how IC50 is derived"](https://en.wikipedia.org/wiki/IC50/media/File:Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png) by JesseAlanGordon is licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). pKi valueThe equilibrium constant [KI](https://en.wikipedia.org/wiki/Equilibrium_constant) is related to the IC50 or [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50) in some cases (see second link)* To facilitate the comparison of Ki values, which have a large value range and are given in different units (M, nM, ...), often pKi values are used* The pKi is the log of the IC50 value when converted to molar units: $ pKi_{i} = log_{10}(K_{i}) $, where $ K_{i}$ is specified in units of M* Higher pKi values indicate exponentially greater potency of the drug* Note that the conversion can be adapted to the respective Ki unit, e.g. for nM: $pK_{i} = log_{10}(K_{i}*10^{-9})= 9-log_{10}(K_{i}) $For the adenosine A2A receptor, most data is Ki data, which we will use in the remainder of this practical PracticalIn the following, we want to download all molecules that have been tested against our target of interest, the **epidermal growth factor receptor** ([**EGFR**](https://www.uniprot.org/uniprot/P00533)) kinase. Connect to ChEMBL database First, the ChEMBL webresource client as well as other Python libraries are imported.
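Note the sign convention: written out fully, $pK_{i} = -log_{10}(K_{i}\,[M]) = 9 - log_{10}(K_{i}\,[nM])$, which matches the conversion implemented later in this notebook.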
###Code
import math
from pathlib import Path
from zipfile import ZipFile
from tempfile import TemporaryDirectory
import numpy as np
import pandas as pd
from rdkit.Chem import PandasTools
from chembl_webresource_client.new_client import new_client
from tqdm.auto import tqdm
HERE = Path(_dh[-1])
DATA = HERE / "data"
###Output
_____no_output_____
###Markdown
Next, we create resource objects for API access.
###Code
targets_api = new_client.target
compounds_api = new_client.molecule
bioactivities_api = new_client.activity
type(targets_api)
###Output
_____no_output_____
###Markdown
Get target data (adenosine A2A receptor)* Get UniProt ID of the target of interest (adenosine A2A receptor: [P29274](https://www.uniprot.org/uniprot/P29274)) from [UniProt website](https://www.uniprot.org/)* Use UniProt ID to get target information. Select a different UniProt ID, if you are interested in another target.
###Code
uniprot_id = "P29274"
###Output
_____no_output_____
###Markdown
Fetch target data from ChEMBL
###Code
# Get target information from ChEMBL but restrict it to specified values only
targets = targets_api.get(target_components__accession=uniprot_id).only(
"target_chembl_id", "organism", "pref_name", "target_type"
)
print(f'The type of the targets is "{type(targets)}"')
###Output
The type of the targets is "<class 'chembl_webresource_client.query_set.QuerySet'>"
###Markdown
Download target data from ChEMBLThe results of the query are stored in `targets`, a `QuerySet`, i.e. the results are not fetched from ChEMBL until we ask for it (here using `pandas.DataFrame.from_records`).More information about the `QuerySet` datatype:> QuerySets are lazy – the act of creating a QuerySet does not involve any database activity. You can stack filters together all day long, and Django will actually not run the query until the QuerySet is evaluated. ([querysets-are-lazy](https://docs.djangoproject.com/en/3.0/topics/db/queries/querysets-are-lazy))
###Code
targets = pd.DataFrame.from_records(targets)
targets
###Output
_____no_output_____
###Markdown
Select target (target ChEMBL ID)After checking the entries, we select the first entry as our target of interest:`CHEMBL251`: It is a single protein and represents the human adenosine A2a receptor.
###Code
target = targets.iloc[0]
target
###Output
_____no_output_____
###Markdown
Save selected ChEMBL ID.
###Code
chembl_id = target.target_chembl_id
print(f"The target ChEMBL ID is {chembl_id}")
# NBVAL_CHECK_OUTPUT
###Output
The target ChEMBL ID is CHEMBL251
###Markdown
Get bioactivity dataNow, we want to query bioactivity data for the target of interest. Fetch bioactivity data for the target from ChEMBL In this step, we fetch the bioactivity data and filter it to only consider* human proteins, * bioactivity type Ki, * exact measurements (relation `'='`), and* binding data (assay type `'B'`).
###Code
bioactivities = bioactivities_api.filter(
target_chembl_id=chembl_id, type="Ki", relation="=", assay_type="B"
).only(
"activity_id",
"assay_chembl_id",
"assay_description",
"assay_type",
"molecule_chembl_id",
"type",
"standard_units",
"relation",
"standard_value",
"target_chembl_id",
"target_organism",
)
print(f"Length and type of bioactivities object: {len(bioactivities)}, {type(bioactivities)}")
###Output
Length and type of bioactivities object: 4629, <class 'chembl_webresource_client.query_set.QuerySet'>
###Markdown
Our bioactivity set contains 4629 entries, each holding the following information.
###Code
print(f"Length and type of first element: {len(bioactivities[0])}, {type(bioactivities[0])}")
bioactivities[0]
###Output
Length and type of first element: 13, <class 'dict'>
###Markdown
Download bioactivity data from ChEMBL Finally, we download the `QuerySet` in the form of a `pandas` `DataFrame`. > **Note**: This step should not take more than 2 minutes, if so try to rerun all cells starting from _"Fetch bioactivity data for the target from ChEMBL"_ or read this message below: Load a local version of the data (in case you encounter any problems while fetching the data) If you experience difficulties to query the ChEMBL database, we also provide the resulting dataframe you will construct in the cell below. If you want to use the saved version, use the following code instead to obtain `bioactivities_df`: ```python replace first line in cell below with this other linebioactivities_df = pd.read_csv(DATA / "adenosineA2A_bioactivities_CHEMBL27.csv.zip", index_col=0)```
###Code
bioactivities_df = pd.read_csv(DATA / "adenosineA2A_bioactivities_CHEMBL27.csv.zip", index_col=0)
#bioactivities_df = pd.DataFrame.from_records(bioactivities)
print(f"DataFrame shape: {bioactivities_df.shape}")
bioactivities_df.head()
###Output
DataFrame shape: (4630, 13)
###Markdown
Note, that we have columns for `standard_units`/`units` and `standard_values`/`values` - in the following we will use the standardized columns (standardization by ChEMBL). Thus, we drop the other two columns.If we used the `units` and `values` columns, we would need to convert all values with many different units to nM:
###Code
bioactivities_df.to_csv(DATA / "adenosineA2A_bioactivities_CHEMBL27.csv.zip")
bioactivities_df["units"].unique()
bioactivities_df.drop(["units", "value"], axis=1, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
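###Markdown
If the raw `units`/`value` columns were used instead, a conversion step along these lines would be needed (illustrative sketch only; the factors shown cover common units and `to_nM` is an ad-hoc name):
###Code
# Illustrative sketch: convert selected units to nM before comparing values
unit_factors = {"nM": 1, "uM": 1e3, "µM": 1e3, "mM": 1e6, "M": 1e9, "pM": 1e-3}

def to_nM(value, unit):
    factor = unit_factors.get(unit)
    return value * factor if factor is not None else None

# e.g. to_nM(0.5, "uM") -> 500.0
###Output
_____no_output_____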
###Markdown
Preprocess and filter bioactivity data1. Convert `standard_value`'s datatype from `object` to `float`2. Delete entries with missing values3. Keep only entries with `standard_unit == nM`4. Delete duplicate molecules5. Reset `DataFrame` index6. Rename columns **1. Convert datatype of "standard_value" from "object" to "float"**The field `standard_value` holds standardized (here Ki) values. In order to make these values usable in calculations later on, convert values to floats.
###Code
bioactivities_df.dtypes
bioactivities_df = bioactivities_df.astype({"standard_value": "float64"})
bioactivities_df.dtypes
###Output
_____no_output_____
###Markdown
**2. Delete entries with missing values**Use the parameter `inplace=True` to drop values in the current `DataFrame` directly.
###Code
bioactivities_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (4630, 11)
###Markdown
**3. Keep only entries with "standard_unit == nM"** We only want to keep bioactivity entries in `nM`, thus we remove all entries with other units.
###Code
print(f"Units in downloaded data: {bioactivities_df['standard_units'].unique()}")
print(
f"Number of non-nM entries:\
{bioactivities_df[bioactivities_df['standard_units'] != 'nM'].shape[0]}"
)
bioactivities_df = bioactivities_df[bioactivities_df["standard_units"] == "nM"]
print(f"Units after filtering: {bioactivities_df['standard_units'].unique()}")
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (4630, 11)
###Markdown
**4. Delete duplicate molecules**Sometimes the same molecule (`molecule_chembl_id`) has been tested more than once, in this case, we only keep the first one.Note other choices could be to keep the one with the best value or a mean value of all assay results for the respective compound.
###Code
bioactivities_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (3838, 11)
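###Markdown
The note above mentions keeping the best or the mean value per molecule instead; an aggregation along these lines would do that (illustrative sketch, not the route taken in this talktorial, so the code is left commented out):
###Code
# Illustrative alternatives for handling replicate measurements per molecule:
# mean Ki per molecule:
# bioactivities_df.groupby("molecule_chembl_id", as_index=False)["standard_value"].mean()
# or keep only the most potent (lowest Ki) measurement:
# bioactivities_df.sort_values("standard_value").drop_duplicates("molecule_chembl_id", keep="first")
###Output
_____no_output_____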
###Markdown
**5. Reset "DataFrame" index**Since we deleted some rows, but we want to iterate over the index later, we reset the index to be continuous.
###Code
bioactivities_df.reset_index(drop=True, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
###Markdown
**6. Rename columns**
###Code
bioactivities_df.rename(
columns={"standard_value": "Ki", "standard_units": "units"}, inplace=True
)
bioactivities_df.head()
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (3838, 11)
###Markdown
We now have a set of **3838** molecule ids with respective Ki values for our target, the adenosine A2A receptor. Get compound dataWe have a `DataFrame` containing all molecules tested against the adenosine A2A receptor (with the respective measured bioactivity). Now, we want to get the molecular structures of the molecules that are linked to the respective bioactivity ChEMBL IDs. Fetch compound data from ChEMBLLet's have a look at the compounds from ChEMBL for which we have bioactivity data: we fetch compound ChEMBL IDs and structures for the compounds linked to our filtered bioactivity data.
###Code
compounds_provider = compounds_api.filter(
molecule_chembl_id__in=list(bioactivities_df["molecule_chembl_id"])
).only("molecule_chembl_id", "molecule_structures")
###Output
_____no_output_____
###Markdown
Download compound data from ChEMBLAgain, we want to export the `QuerySet` object into a `pandas.DataFrame`. Given the data volume, **this can take some time.** For that reason, we will first obtain the list of records through `tqdm`, so we get a nice progress bar and some ETAs. We can then pass the list of compounds to the DataFrame.
###Code
compounds = list(tqdm(compounds_provider))
compounds_df = pd.DataFrame.from_records(
compounds,
)
print(f"DataFrame shape: {compounds_df.shape}")
compounds_df.head()
###Output
_____no_output_____
###Markdown
Preprocess and filter compound data1. Remove entries with missing molecule structures2. Delete duplicate molecules (by molecule_chembl_id)3. Get molecules with canonical SMILES **1. Remove entries with a missing molecule structure entry**
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (3838, 2)
###Markdown
**2. Delete duplicate molecules**
###Code
compounds_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (3838, 2)
###Markdown
**3. Get molecules with canonical SMILES**So far, we have multiple different molecular structure representations. We only want to keep the canonical SMILES.
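The loop in the next cell extracts the canonical SMILES entry by entry. A more compact variant (a sketch that assumes every `molecule_structures` entry is either a dictionary or missing) could use `apply` instead:

```python
# Hypothetical alternative to the loop below: take the canonical SMILES from each
# molecule_structures dictionary, falling back to None if the key is absent.
compounds_df["smiles"] = compounds_df["molecule_structures"].apply(
    lambda structures: structures.get("canonical_smiles") if isinstance(structures, dict) else None
)
```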
###Code
compounds_df.iloc[0].molecule_structures.keys()
canonical_smiles = []
for i, compounds in compounds_df.iterrows():
try:
canonical_smiles.append(compounds["molecule_structures"]["canonical_smiles"])
except KeyError:
canonical_smiles.append(None)
compounds_df["smiles"] = canonical_smiles
compounds_df.drop("molecule_structures", axis=1, inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (3838, 2)
###Markdown
Sanity check: Remove all molecules without a canonical SMILES string.
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (3838, 2)
###Markdown
Output (bioactivity-compound) data**Summary of compound and bioactivity data**
###Code
print(f"Bioactivities filtered: {bioactivities_df.shape[0]}")
bioactivities_df.columns
print(f"Compounds filtered: {compounds_df.shape[0]}")
compounds_df.columns
###Output
Compounds filtered: 3838
###Markdown
Merge both datasetsMerge values of interest from `bioactivities_df` and `compounds_df` in an `output_df` based on the compounds' ChEMBL IDs (`molecule_chembl_id`), keeping the following columns:* ChEMBL IDs: `molecule_chembl_id`* SMILES: `smiles`* units: `units`* Ki: `Ki`
###Code
# Merge DataFrames
output_df = pd.merge(
bioactivities_df[["molecule_chembl_id", "Ki", "units"]],
compounds_df,
on="molecule_chembl_id",
)
# Reset row indices
output_df.reset_index(drop=True, inplace=True)
print(f"Dataset with {output_df.shape[0]} entries.")
output_df.dtypes
output_df.head(10)
###Output
_____no_output_____
###Markdown
Add pKi values As you can see, the raw Ki values are difficult to compare (the values are distributed over multiple orders of magnitude), which is why we convert the Ki values to pKi.
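As a quick worked example for the conversion implemented in the next cell (our own numbers): a Ki of $100$ nM gives $pK_{i} = 9 - log_{10}(100) = 7$, and a Ki of $1$ nM gives $pK_{i} = 9$; more potent compounds thus get higher pKi values.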
###Code
def convert_Ki_to_pKi(Ki_value):
pKi_value = 9 - math.log10(Ki_value)
return pKi_value
# Apply conversion to each row of the compounds DataFrame
output_df["pKi"] = output_df.apply(lambda x: convert_Ki_to_pKi(x.Ki), axis=1)
output_df.head()
###Output
_____no_output_____
###Markdown
Draw compound dataLet's have a look at our collected data set. First, we plot the pKi value distribution.
###Code
output_df.hist(column="pKi")
###Output
_____no_output_____
###Markdown
In the next steps, we add a column for RDKit molecule objects to our `DataFrame` and look at the structures of the molecules with the highest pKi values.
###Code
# Add molecule column
PandasTools.AddMoleculeColumnToFrame(output_df, smilesCol="smiles")
# Sort molecules by pKi
output_df.sort_values(by="pKi", ascending=False, inplace=True)
# Reset index
output_df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Show the three most active molecules, i.e. molecules with the highest pKi values.
###Code
output_df.drop("smiles", axis=1).head(3)
# Prepare saving the dataset: Drop the ROMol column
output_df = output_df.drop("ROMol", axis=1)
print(f"DataFrame shape: {output_df.shape}")
###Output
DataFrame shape: (3838, 5)
###Markdown
Write output data to fileWe want to use this bioactivity-compound dataset in the following talktorials, thus we save the data as a `csv` file. Note that it is advisable to drop the molecule column (which only contains an image of the molecules) when saving the data.
###Code
output_df.to_csv(DATA / "A2A_compounds.csv")
output_df.head()
print(f"DataFrame shape: {output_df.shape}")
# NBVAL_CHECK_OUTPUT
###Output
DataFrame shape: (3838, 5)
###Markdown
T001 · Compound data acquisition (ChEMBL)Authors:- Svetlana Leng, CADD seminar 2017, Volkamer lab, Charité/FU Berlin - Paula Junge, CADD seminar 2018, Volkamer lab, Charité/FU Berlin- Dominique Sydow, 2019-2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Andrea Volkamer, 2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Yonghui Chen, 2020, [Volkamer lab, Charité](https://volkamerlab.org/) __Talktorial T001__: This talktorial is part of the TeachOpenCADD pipeline described in the [first TeachOpenCADD paper](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x), comprising of talktorials T001-T010. Aim of this talktorialIn this notebook, we will learn more about the ChEMBL database and how to extract data from ChEMBL, i.e. (compound, activity data) pairs for a target of interest. These data sets can be used for many cheminformatics tasks, such as similarity search, clustering or machine learning.Our work here will include finding compounds which were tested against a certain target and filtering available bioactivity data. Contents in *Theory** ChEMBL database * ChEMBL web services * ChEMBL webresource client* Compound activity measures * IC50 measure * pIC50 value Contents in *Practical* **Goal: Get a list of compounds with bioactivity data for a given target*** Connect to ChEMBL database* Get target data (example: EGFR kinase) * Fetch and download target data * Select target ChEMBL ID* Get bioactivity data * Fetch and download bioactivity data for target * Preprocess and filter bioactivity data* Get compound data * Fetch and download compound data * Preprocess and filter compound data* Output bioactivity-compound data * Merge bioactivity and compound data, and add pIC50 values * Draw molecules with highest pIC50 * Freeze bioactivity data to ChEMBL 27 * Write output file References* ChEMBL bioactivity database: [Gaulton *et al.*, Nucleic Acids Res. (2017), 45(Database issue), D945–D954](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5210557/)* ChEMBL web services: [Davies *et al.*, Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881) * [ChEMBL web-interface](https://www.ebi.ac.uk/chembl/)* GitHub [ChEMBL webrescource client](https://github.com/chembl/chembl_webresource_client)* The EBI RDF platform: [Jupp *et al.*, Bioinformatics (2014), 30(9), 1338-9](https://www.ncbi.nlm.nih.gov/pubmed/24413672)* Info on half maximal inhibitory concentration: [(p)IC50](https://en.wikipedia.org/wiki/IC50)* [UniProt website](https://www.uniprot.org/) Theory ChEMBL database>"ChEMBL is a manually curated database of bioactive molecules with drug-like properties. It brings together chemical, bioactivity and genomic data to aid the translation of genomic information into effective new drugs." 
([ChEMBL website](https://www.ebi.ac.uk/chembl/))* Open large-scale bioactivity database* **Current data content (as of 09.2020, ChEMBL 27):** * \>1.9 million distinct compounds * \>16 million activity values * Assays are mapped to ~13,000 targets* **Data sources** include scientific literature, PubChem bioassays, Drugs for Neglected Diseases Initiative (DNDi), BindingDB database, ...* ChEMBL data can be accessed via a [web-interface](https://www.ebi.ac.uk/chembl/), the [EBI-RDF platform](https://www.ncbi.nlm.nih.gov/pubmed/24413672) and the [ChEMBL webrescource client](https://github.com/chembl/chembl_webresource_client) ChEMBL web services* RESTful web service* ChEMBL web service version 2.x resource schema: *Figure 1:* "[ChEMBL web service schema diagram](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/figure/F2/). The oval shapes represent ChEMBL web service resources and the line between two resources indicates that they share a common attribute. The arrow direction shows where the primary information about a resource type can be found. A dashed line indicates the relationship between two resources behaves differently. For example, the `Image` resource provides a graphical based representation of a `Molecule`."Figure and description taken from: [Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881). ChEMBL webresource client* Python client library for accessing ChEMBL data* Handles interaction with the HTTPS protocol* Lazy evaluation of results -> reduced number of network requests Compound activity measures IC50 measure* [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50)* Indicates how much of a particular drug or other substance is needed to inhibit a given biological process by half*Figure 2:* Visual demonstration of how to derive an IC50 value: (i) Arrange inhibition data on y-axis and log(concentration) on x-axis. (ii) Identify maximum and minimum inhibition. (iii) The IC50 is the concentration at which the curve passes through the 50% inhibition level. Figure ["Example IC50 curve demonstrating visually how IC50 is derived"](https://en.wikipedia.org/wiki/IC50/media/File:Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png) by JesseAlanGordon is licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). pIC50 value* To facilitate the comparison of IC50 values, which have a large value range and are given in different units (M, nM, ...), often pIC50 values are used* The pIC50 is the negative log of the IC50 value when converted to molar units: $ pIC_{50} = -log_{10}(IC_{50}) $, where $ IC_{50}$ is specified in units of M* Higher pIC50 values indicate exponentially greater potency of the drug* Note that the conversion can be adapted to the respective IC50 unit, e.g. for nM: $pIC_{50} = -log_{10}(IC_{50}*10^{-9})= 9-log_{10}(IC_{50}) $Other activity measures:Besides, IC50 and pIC50, other bioactivity measures are used, such as the equilibrium constant [KI](https://en.wikipedia.org/wiki/Equilibrium_constant) and the half maximal effective concentration [EC50](https://en.wikipedia.org/wiki/EC50). PracticalIn the following, we want to download all molecules that have been tested against our target of interest, the **epidermal growth factor receptor** ([**EGFR**](https://www.uniprot.org/uniprot/P00533)) kinase. Connect to ChEMBL database First, the ChEMBL webresource client as well as other Python libraries are imported.
###Code
import math
from pathlib import Path
from zipfile import ZipFile
from tempfile import TemporaryDirectory
import numpy as np
import pandas as pd
from rdkit.Chem import PandasTools
from chembl_webresource_client.new_client import new_client
from tqdm.auto import tqdm
HERE = Path(_dh[-1])
DATA = HERE / "data"
###Output
_____no_output_____
###Markdown
Next, we create resource objects for API access.
###Code
targets_api = new_client.target
compounds_api = new_client.molecule
bioactivities_api = new_client.activity
type(targets_api)
###Output
_____no_output_____
###Markdown
Get target data (EGFR kinase)* Get UniProt ID of the target of interest (EGFR kinase: [P00533](http://www.uniprot.org/uniprot/P00533)) from [UniProt website](https://www.uniprot.org/)* Use UniProt ID to get target informationSelect a different UniProt ID, if you are interested in another target.
###Code
uniprot_id = "P00533"
###Output
_____no_output_____
###Markdown
Fetch target data from ChEMBL
###Code
# Get target information from ChEMBL but restrict it to specified values only
targets = targets_api.get(target_components__accession=uniprot_id).only(
"target_chembl_id", "organism", "pref_name", "target_type"
)
print(f'The type of the targets is "{type(targets)}"')
###Output
The type of the targets is "<class 'chembl_webresource_client.query_set.QuerySet'>"
###Markdown
Download target data from ChEMBLThe results of the query are stored in `targets`, a `QuerySet`, i.e. the results are not fetched from ChEMBL until we ask for it (here using `pandas.DataFrame.from_records`).More information about the `QuerySet` datatype:> QuerySets are lazy – the act of creating a QuerySet does not involve any database activity. You can stack filters together all day long, and Django will actually not run the query until the QuerySet is evaluated. ([querysets-are-lazy](https://docs.djangoproject.com/en/3.0/topics/db/queries/querysets-are-lazy))
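To make this laziness tangible (a small sketch based on the description above, not part of the original notebook), note that building a query costs nothing until its results are consumed:

```python
# Building and chaining the query does not hit the ChEMBL servers yet.
lazy_targets = targets_api.get(target_components__accession=uniprot_id).only("pref_name")
# The request is only sent once we actually consume the results, e.g. by indexing,
# iterating, or converting to a DataFrame as done in the next cell.
first_record = lazy_targets[0]
```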
###Code
targets = pd.DataFrame.from_records(targets)
targets
###Output
_____no_output_____
###Markdown
Select target (target ChEMBL ID)After checking the entries, we select the first entry as our target of interest:`CHEMBL203`: It is a single protein and represents the human Epidermal growth factor receptor (EGFR, also named erbB1)
###Code
target = targets.iloc[0]
target
###Output
_____no_output_____
###Markdown
Save selected ChEMBL ID.
###Code
chembl_id = target.target_chembl_id
print(f"The target ChEMBL ID is {chembl_id}")
# NBVAL_CHECK_OUTPUT
###Output
The target ChEMBL ID is CHEMBL203
###Markdown
Get bioactivity dataNow, we want to query bioactivity data for the target of interest. Fetch bioactivity data for the target from ChEMBL In this step, we fetch the bioactivity data and filter it to only consider* human proteins, * bioactivity type IC50, * exact measurements (relation `'='`), and* binding data (assay type `'B'`).
###Code
bioactivities = bioactivities_api.filter(
target_chembl_id=chembl_id, type="IC50", relation="=", assay_type="B"
).only(
"activity_id",
"assay_chembl_id",
"assay_description",
"assay_type",
"molecule_chembl_id",
"type",
"standard_units",
"relation",
"standard_value",
"target_chembl_id",
"target_organism",
)
print(f"Length and type of bioactivities object: {len(bioactivities)}, {type(bioactivities)}")
###Output
Length and type of bioactivities object: 8816, <class 'chembl_webresource_client.query_set.QuerySet'>
###Markdown
Our bioactivity set contains 8816 entries, each holding the following information.
###Code
print(f"Length and type of first element: {len(bioactivities[0])}, {type(bioactivities[0])}")
bioactivities[0]
###Output
Length and type of first element: 13, <class 'dict'>
###Markdown
Download bioactivity data from ChEMBL Finally, we download the `QuerySet` in the form of a `pandas` `DataFrame`. > **Note**: This step should not take more than 2 minutes, if so try to rerun all cells starting from _"Fetch bioactivity data for the target from ChEMBL"_ or read this message below: Load a local version of the data (in case you encounter any problems while fetching the data) If you experience difficulties to query the ChEMBL database, we also provide the resulting dataframe you will construct in the cell below. If you want to use the saved version, use the following code instead to obtain `bioactivities_df`:

```python
# replace first line in cell below with this other line
bioactivities_df = pd.read_csv(DATA / "EGFR_bioactivities_CHEMBL27.csv.zip", index_col=0)
```
###Code
bioactivities_df = pd.DataFrame.from_records(bioactivities)
print(f"DataFrame shape: {bioactivities_df.shape}")
bioactivities_df.head()
###Output
DataFrame shape: (8817, 13)
###Markdown
Note that we have columns for `standard_units`/`units` and `standard_value`/`value` - in the following we will use the standardized columns (standardization by ChEMBL). Thus, we drop the other two columns. If we used the `units` and `value` columns, we would need to convert all values with many different units to nM:
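Purely for illustration (a sketch of our own that is not executed in this notebook), such a conversion could look roughly like this, with conversion factors for a few example units:

```python
# Hypothetical helper: convert a raw value/unit pair to nM.
# The factor table only covers a handful of units and is our own assumption.
UNIT_TO_NM = {"nM": 1, "uM": 1e3, "µM": 1e3, "mM": 1e6, "M": 1e9}

def to_nanomolar(value, unit):
    factor = UNIT_TO_NM.get(unit)
    return float(value) * factor if factor is not None else None
```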
###Code
bioactivities_df["units"].unique()
bioactivities_df.drop(["units", "value"], axis=1, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
###Markdown
Preprocess and filter bioactivity data1. Convert `standard_value`'s datatype from `object` to `float`2. Delete entries with missing values3. Keep only entries with `standard_unit == nM`4. Delete duplicate molecules5. Reset `DataFrame` index6. Rename columns **1. Convert datatype of "standard_value" from "object" to "float"**The field `standard_value` holds standardized (here IC50) values. In order to make these values usable in calculations later on, convert values to floats.
###Code
bioactivities_df.dtypes
bioactivities_df = bioactivities_df.astype({"standard_value": "float64"})
bioactivities_df.dtypes
###Output
_____no_output_____
###Markdown
**2. Delete entries with missing values**Use the parameter `inplace=True` to drop values in the current `DataFrame` directly.
###Code
bioactivities_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (8816, 11)
###Markdown
**3. Keep only entries with "standard_unit == nM"** We only want to keep bioactivity entries in `nM`, thus we remove all entries with other units.
###Code
print(f"Units in downloaded data: {bioactivities_df['standard_units'].unique()}")
print(
f"Number of non-nM entries:\
{bioactivities_df[bioactivities_df['standard_units'] != 'nM'].shape[0]}"
)
bioactivities_df = bioactivities_df[bioactivities_df["standard_units"] == "nM"]
print(f"Units after filtering: {bioactivities_df['standard_units'].unique()}")
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (8747, 11)
###Markdown
**4. Delete duplicate molecules**Sometimes the same molecule (`molecule_chembl_id`) has been tested more than once; in this case, we only keep the first entry. Note that other choices could be to keep the entry with the best value or the mean value of all assay results for the respective compound.
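As a sketch of the "keep the best value" alternative mentioned above (our own illustration; the notebook itself simply keeps the first entry), one could sort by potency first so that dropping duplicates retains the lowest IC50 per compound:

```python
# Hypothetical alternative (not used below): keep the most potent measurement,
# i.e. the smallest standard_value (IC50 in nM), for each compound.
best_per_compound = (
    bioactivities_df.sort_values("standard_value")
    .drop_duplicates("molecule_chembl_id", keep="first")
)
```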
###Code
bioactivities_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (6059, 11)
###Markdown
**5. Reset "DataFrame" index**Since we deleted some rows and want to iterate over the index later, we reset the index to be continuous.
###Code
bioactivities_df.reset_index(drop=True, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
###Markdown
**6. Rename columns**
###Code
bioactivities_df.rename(
columns={"standard_value": "IC50", "standard_units": "units"}, inplace=True
)
bioactivities_df.head()
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (6059, 11)
###Markdown
We now have a set of **6059** molecule ids with respective IC50 values for our target kinase. Get compound dataWe have a `DataFrame` containing all molecules tested against EGFR (with the respective measured bioactivity). Now, we want to get the molecular structures of the molecules that are linked to the respective bioactivity ChEMBL IDs. Fetch compound data from ChEMBLLet's have a look at the compounds from ChEMBL for which we have bioactivity data: We fetch compound ChEMBL IDs and structures for the compounds linked to our filtered bioactivity data.
###Code
compounds_provider = compounds_api.filter(
molecule_chembl_id__in=list(bioactivities_df["molecule_chembl_id"])
).only("molecule_chembl_id", "molecule_structures")
###Output
_____no_output_____
###Markdown
Download compound data from ChEMBLAgain, we want to export the `QuerySet` object into a `pandas.DataFrame`. Given the data volume, **this can take some time.** For that reason, we will first obtain the list of records through `tqdm`, so we get a nice progress bar and some ETAs. We can then pass the list of compounds to the DataFrame.
###Code
compounds = list(tqdm(compounds_provider))
compounds_df = pd.DataFrame.from_records(
compounds,
)
print(f"DataFrame shape: {compounds_df.shape}")
compounds_df.head()
###Output
_____no_output_____
###Markdown
Preprocess and filter compound data1. Remove entries with missing molecule structures2. Delete duplicate molecules (by molecule_chembl_id)3. Get molecules with canonical SMILES **1. Remove entries with a missing molecule structure entry**
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
**2. Delete duplicate molecules**
###Code
compounds_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
**3. Get molecules with canonical SMILES**So far, we have multiple different molecular structure representations. We only want to keep the canonical SMILES.
###Code
compounds_df.iloc[0].molecule_structures.keys()
canonical_smiles = []
for i, compounds in compounds_df.iterrows():
try:
canonical_smiles.append(compounds["molecule_structures"]["canonical_smiles"])
except KeyError:
canonical_smiles.append(None)
compounds_df["smiles"] = canonical_smiles
compounds_df.drop("molecule_structures", axis=1, inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
Sanity check: Remove all molecules without a canonical SMILES string.
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
Output (bioactivity-compound) data**Summary of compound and bioactivity data**
###Code
print(f"Bioactivities filtered: {bioactivities_df.shape[0]}")
bioactivities_df.columns
print(f"Compounds filtered: {compounds_df.shape[0]}")
compounds_df.columns
###Output
Compounds filtered: 6052
###Markdown
Merge both datasetsMerge values of interest from `bioactivities_df` and `compounds_df` in an `output_df` based on the compounds' ChEMBL IDs (`molecule_chembl_id`), keeping the following columns:* ChEMBL IDs: `molecule_chembl_id`* SMILES: `smiles`* units: `units`* IC50: `IC50`
###Code
# Merge DataFrames
output_df = pd.merge(
bioactivities_df[["molecule_chembl_id", "IC50", "units"]],
compounds_df,
on="molecule_chembl_id",
)
# Reset row indices
output_df.reset_index(drop=True, inplace=True)
print(f"Dataset with {output_df.shape[0]} entries.")
output_df.dtypes
output_df.head(10)
###Output
_____no_output_____
###Markdown
Add pIC50 values As you can see, the raw IC50 values are difficult to compare (the values are distributed over multiple orders of magnitude), which is why we convert the IC50 values to pIC50.
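For orientation (our own example values): an IC50 of $100$ nM, i.e. $10^{-7}$ M, gives $pIC_{50} = -log_{10}(10^{-7}) = 9 - log_{10}(100) = 7$, so the M-based and nM-based forms of the formula agree.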
###Code
def convert_ic50_to_pic50(IC50_value):
pIC50_value = 9 - math.log10(IC50_value)
return pIC50_value
# Apply conversion to each row of the compounds DataFrame
output_df["pIC50"] = output_df.apply(lambda x: convert_ic50_to_pic50(x.IC50), axis=1)
output_df.head()
###Output
_____no_output_____
###Markdown
Draw compound dataLet's have a look at our collected data set.First, we plot the pIC50 value distribution
###Code
output_df.hist(column="pIC50")
###Output
_____no_output_____
###Markdown
In the next steps, we add a column for RDKit molecule objects to our `DataFrame` and look at the structures of the molecules with the highest pIC50 values.
###Code
# Add molecule column
PandasTools.AddMoleculeColumnToFrame(output_df, smilesCol="smiles")
# Sort molecules by pIC50
output_df.sort_values(by="pIC50", ascending=False, inplace=True)
# Reset index
output_df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Show the three most active molecules, i.e. molecules with the highest pIC50 values.
###Code
output_df.drop("smiles", axis=1).head(3)
# Prepare saving the dataset: Drop the ROMol column
output_df = output_df.drop("ROMol", axis=1)
print(f"DataFrame shape: {output_df.shape}")
###Output
DataFrame shape: (6052, 5)
###Markdown
Freeze output data to ChEMBL 27This is a technical step: Usually, we would continue to work with the dataset that we just created (latest dataset). However, here on the TeachOpenCADD platform, we prefer to freeze the dataset to a certain ChEMBL release (i.e. [ChEMBL 27](http://doi.org/10.6019/CHEMBL.database.27)), so that this talktorial and other talktorials downstream in our CADD pipeline do not change in the future (helping us to maintain the talktorials). Note: If you prefer to run this notebook on the latest dataset or if you want to use it for another target, please comment out the cell below.
###Code
# Disable this cell to unfreeze the dataset
output_df = pd.read_csv(
DATA / "EGFR_compounds_ea055ef.csv", index_col=0, float_precision="round_trip"
)
output_df.head()
print(f"DataFrame shape: {output_df.shape}")
# NBVAL_CHECK_OUTPUT
###Output
DataFrame shape: (5568, 5)
###Markdown
Write output data to fileWe want to use this bioactivity-compound dataset in the following talktorials, thus we save the data as a `csv` file. Note that it is advisable to drop the molecule column (which only contains an image of the molecules) when saving the data.
###Code
output_df.to_csv(DATA / "EGFR_compounds.csv")
output_df.head()
print(f"DataFrame shape: {output_df.shape}")
# NBVAL_CHECK_OUTPUT
###Output
DataFrame shape: (5568, 5)
###Markdown
T001 · Compound data acquisition (ChEMBL)Authors:- Svetlana Leng, CADD seminar 2017, Volkamer lab, Charité/FU Berlin - Paula Junge, CADD seminar 2018, Volkamer lab, Charité/FU Berlin- Dominique Sydow, 2019-2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Andrea Volkamer, 2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Yonghui Chen, 2020, [Volkamer lab, Charité](https://volkamerlab.org/) __Talktorial T001__: This talktorial is part of the TeachOpenCADD pipeline described in the [first TeachOpenCADD paper](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x), comprising of talktorials T001-T010. Aim of this talktorialIn this notebook, we will learn more about the ChEMBL database and how to extract data from ChEMBL, i.e. (compound, activity data) pairs for a target of interest. These data sets can be used for many cheminformatics tasks, such as similarity search, clustering or machine learning.Our work here will include finding compounds which were tested against a certain target and filtering available bioactivity data. Contents in *Theory** ChEMBL database * ChEMBL web services * ChEMBL webresource client* Compound activity measures * IC50 measure * pIC50 value Contents in *Practical* **Goal: Get a list of compounds with bioactivity data for a given target*** Connect to ChEMBL database* Get target data (example: EGFR kinase) * Fetch and download target data * Select target ChEMBL ID* Get bioactivity data * Fetch and download bioactivity data for target * Preprocess and filter bioactivities* Get compound data * Fetch and download compound data * Preprocess and filter compound data* Output bioactivity-compound data * Merge bioactivity and compound data, and add pIC50 values * Draw molecules with highest pIC50 * Write output file References* ChEMBL bioactivity database: [Gaulton *et al.*, Nucleic Acids Res. (2017), 45(Database issue), D945–D954](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5210557/)* ChEMBL web services: [Davies *et al.*, Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881) * [ChEMBL web-interface](https://www.ebi.ac.uk/chembl/)* GitHub [ChEMBL webrescource client](https://github.com/chembl/chembl_webresource_client)* [myChEMBL web services version 2.x](https://github.com/chembl/mychembl/blob/master/ipython_notebooks/09_myChEMBL_web_services.ipynb)* The EBI RDF platform: [Jupp *et al.*, Bioinformatics (2014), 30(9), 1338-9](https://www.ncbi.nlm.nih.gov/pubmed/24413672)* Info on half maximal inhibitory concentration: [(p)IC50](https://en.wikipedia.org/wiki/IC50)* [UniProt website](https://www.uniprot.org/) Theory ChEMBL database>"ChEMBL is a manually curated database of bioactive molecules with drug-like properties. It brings together chemical, bioactivity and genomic data to aid the translation of genomic information into effective new drugs." 
([ChEMBL website](https://www.ebi.ac.uk/chembl/))* Open large-scale bioactivity database* **Current data content (as of 09.2020, ChEMBL 27):** * \>1.9 million distinct compounds * \>16 million activity values * Assays are mapped to ~13,000 targets* **Data sources** include scientific literature, PubChem bioassays, Drugs for Neglected Diseases Initiative (DNDi), BindingDB database, ...* ChEMBL data can be accessed via a [web-interface](https://www.ebi.ac.uk/chembl/), the [EBI-RDF platform](https://www.ncbi.nlm.nih.gov/pubmed/24413672) and the [ChEMBL web services](https://github.com/chembl/mychembl/blob/master/ipython_notebooks/09_myChEMBL_web_services.ipynb) ChEMBL web services* RESTful web service* ChEMBL web service version 2.x resource schema: *Figure 1:* "[ChEMBL web service schema diagram](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/figure/F2/). The oval shapes represent ChEMBL web service resources and the line between two resources indicates that they share a common attribute. The arrow direction shows where the primary information about a resource type can be found. A dashed line indicates the relationship between two resources behaves differently. For example, the `Image` resource provides a graphical based representation of a `Molecule`."Figure and description taken from: [Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881). ChEMBL webresource client* Python client library for accessing ChEMBL data* Handles interaction with the HTTPS protocol* Lazy evaluation of results -> reduced number of network requests Compound activity measures IC50 measure* [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50)* Indicates how much of a particular drug or other substance is needed to inhibit a given biological process by half*Figure 2:* Visual demonstration of how to derive an IC50 value: (i) Arrange inhibition data on y-axis and log(concentration) on x-axis. (ii) Identify maximum and minimum inhibition. (iii) The IC50 is the concentration at which the curve passes through the 50% inhibition level. Figure ["Example IC50 curve demonstrating visually how IC50 is derived"](https://en.wikipedia.org/wiki/IC50/media/File:Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png) by JesseAlanGordon is licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). pIC50 value* To facilitate the comparison of IC50 values, which have a large value range and are given in different units (M, nM, ...), often pIC50 values are used* The pIC50 is the negative log of the IC50 value when converted to molar units: $ pIC_{50} = -log_{10}(IC_{50}) $, where $ IC_{50}$ is specified in units of M* Higher pIC50 values indicate exponentially greater potency of the drug* Note that the conversion can be adapted to the respective IC50 unit, e.g. for nM: $pIC_{50} = -log_{10}(IC_{50}*10^{-9})= 9-log_{10}(IC_{50}) $Other activity measures:Besides, IC50 and pIC50, other bioactivity measures are used, such as the equilibrium constant [KI](https://en.wikipedia.org/wiki/Equilibrium_constant) and the half maximal effective concentration [EC50](https://en.wikipedia.org/wiki/EC50). PracticalIn the following, we want to download all molecules that have been tested against our target of interest, the **epidermal growth factor receptor** ([**EGFR**](https://www.uniprot.org/uniprot/P00533)) kinase. Connect to ChEMBL database First, the ChEMBL webresource client as well as other python libraries are imported.
###Code
import math
from pathlib import Path
from zipfile import ZipFile
from tempfile import TemporaryDirectory
import numpy as np
import pandas as pd
from rdkit.Chem import PandasTools
from chembl_webresource_client.new_client import new_client
HERE = Path(_dh[-1])
DATA = HERE / "data"
###Output
_____no_output_____
###Markdown
Next, we create resource objects for API access.
###Code
targets_api = new_client.target
compounds_api = new_client.molecule
bioactivities_api = new_client.activity
type(targets_api)
###Output
_____no_output_____
###Markdown
Get target data (EGFR kinase)* Get UniProt ID of the target of interest (EGFR kinase: [P00533](http://www.uniprot.org/uniprot/P00533)) from [UniProt website](https://www.uniprot.org/)* Use UniProt ID to get target informationSelect a different UniProt ID, if you are interested in another target.
###Code
uniprot_id = "P00533"
###Output
_____no_output_____
###Markdown
Fetch target data from ChEMBL
###Code
# Get target information from ChEMBL but restrict it to specified values only
targets = targets_api.get(target_components__accession=uniprot_id).only(
"target_chembl_id", "organism", "pref_name", "target_type"
)
print(f'The type of the targets is "{type(targets)}"')
###Output
The type of the targets is "<class 'chembl_webresource_client.query_set.QuerySet'>"
###Markdown
Download target data from ChEMBLThe results of the query are stored in `targets`, a `QuerySet`, i.e. the results are not fetched from ChEMBL until we ask for it (here using `pandas.DataFrame.from_records`).More information about the `QuerySet` datatype:> QuerySets are lazy – the act of creating a QuerySet does not involve any database activity. You can stack filters together all day long, and Django will actually not run the query until the QuerySet is evaluated. ([querysets-are-lazy](https://docs.djangoproject.com/en/3.0/topics/db/queries/querysets-are-lazy))
###Code
targets = pd.DataFrame.from_records(targets)
targets
###Output
_____no_output_____
###Markdown
Select target (target ChEMBL ID)After checking the entries, we select the first entry as our target of interest:`CHEMBL203`: It is a single protein and represents the human Epidermal growth factor receptor (EGFR, also named erbB1)
###Code
target = targets.iloc[0]
target
###Output
_____no_output_____
###Markdown
Save selected ChEMBL ID.
###Code
chembl_id = target.target_chembl_id
print(f"The target ChEMBL ID is {chembl_id}")
# NBVAL_CHECK_OUTPUT
###Output
The target ChEMBL ID is CHEMBL203
###Markdown
Get bioactivity dataNow, we want to query bioactivity data for the target of interest. Fetch bioactivity data for the target from ChEMBL In this step, we fetch the bioactivity data and filter it to only consider* human proteins, * bioactivity type IC50, * exact measurements (relation `'='`), and* binding data (assay type `'B'`).
###Code
bioactivities = bioactivities_api.filter(
target_chembl_id=chembl_id, type="IC50", relation="=", assay_type="B"
).only(
"activity_id",
"assay_chembl_id",
"assay_description",
"assay_type",
"molecule_chembl_id",
"type",
"standard_units",
"relation",
"standard_value",
"target_chembl_id",
"target_organism",
)
print(f"Length and type of bioactivities object: {len(bioactivities)}, {type(bioactivities)}")
# NBVAL_CHECK_OUTPUT
###Output
Length and type of bioactivities object: 7177, <class 'chembl_webresource_client.query_set.QuerySet'>
###Markdown
Our bioactivity set contains 7177 entries, each holding the following information.
###Code
print(f"Length and type of first element: {len(bioactivities[0])}, {type(bioactivities[0])}")
bioactivities[0]
###Output
Length and type of first element: 13, <class 'dict'>
###Markdown
Download bioactivity data from ChEMBL Finally, we download the `QuerySet` in the form of a `pandas` `DataFrame`. > **Note**: This step should not take more than 2 minutes, if so try to rerun all cells starting from _"Fetch bioactivity data for the target from ChEMBL"_ or read this message below: Load a local version of the data (in case you encounter any problems while fetching the data) If you experience difficulties to query the ChEMBL database, we also provide the resulting dataframe you will construct in the cell below. If you want to use the saved version, use the following code instead to obtain `bioactivities_df`:

```python
# replace first line in cell below with this other line
bioactivities_df = pd.read_csv(DATA / "EGFR_compounds_CHEMBL27.csv.zip", index_col=0)
```
###Code
bioactivities_df = pd.DataFrame.from_records(bioactivities)
print(f"DataFrame shape: {bioactivities_df.shape}")
bioactivities_df.head()
###Output
DataFrame shape: (7178, 13)
###Markdown
Note that we have columns for `standard_units`/`units` and `standard_value`/`value` - in the following we will use the standardized columns (standardization by ChEMBL). Thus, we drop the other two columns. If we used the `units` and `value` columns, we would need to convert all values with many different units to nM:
###Code
bioactivities_df["units"].unique()
bioactivities_df.drop(["units", "value"], axis=1, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
###Markdown
Freeze bioactivity data to ChEMBL 27This is a technical step: Usually, we would continue to work with the dataset that we just downloaded (latest dataset). However, here on the TeachOpenCADD platform, we prefer to freeze the dataset to a certain ChEMBL release (i.e. [ChEMBL 27](http://doi.org/10.6019/CHEMBL.database.27)), so that this talktorial and other talktorials downstream in our CADD pipeline do not change in the future (helping us to maintain the talktorials). This cell will load the bioactivity IDs of the ChEMBL 27 release from the file `data/chembl27_activities.npz.zip`. We have to uncompress it to a temporary file first, and then load it with `numpy`. If you are interested, you can check out how this file was generated in `data/all_chembl_activities.ipynb`.
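For context, an archive of this kind could be produced along the following lines (a rough sketch of our own; the actual generation steps live in the referenced notebook):

```python
# Hypothetical sketch: write an array of activity IDs under the key "activities",
# which matches the key that is read back in the cell below.
all_activity_ids = np.array([1, 2, 3], dtype=np.int64)  # placeholder; the real file holds ~16 million IDs
np.savez_compressed("chembl27_activities.npz", activities=all_activity_ids)
```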
###Code
with ZipFile(DATA / "chembl27_activities.npz.zip") as z, TemporaryDirectory() as tmpdir:
z.extract("chembl27_activities.npz", tmpdir)
with np.load(Path(tmpdir) / "chembl27_activities.npz") as f:
bioactivity_ids_chembl_27 = set(f["activities"])
print(f"Number of bioactivity values in full ChEMBL 27 release: {len(bioactivity_ids_chembl_27)}")
# NBVAL_CHECK_OUTPUT
###Output
Number of bioactivity values in full ChEMBL 27 release: 16066124
###Markdown
Keep only bioactivities with bioactivity IDs from the ChEMBL 27 release.
###Code
print(f"Number of bioactivities queried for EGFR in this notebook: {bioactivities_df.shape[0]}")
# Get the intersection between the current and the frozen ChEMBL version
bioactivities_df = bioactivities_df[
bioactivities_df["activity_id"].isin(bioactivity_ids_chembl_27)
].copy()
# NBVAL_CHECK_OUTPUT
print(f"Number of bioactivities after ChEMBL 27 intersection: {bioactivities_df.shape[0]}")
###Output
Number of bioactivities queried for EGFR in this notebook: 7178
Number of bioactivities after ChEMBL 27 intersection: 7178
###Markdown
Note: If these numbers are the same, it means the latest ChEMBL release is still ChEMBL 27. You can check out the current [ChEMBL](https://chembl.gitbook.io/chembl-interface-documentation/downloads) release. Preprocess and filter bioactivity data1. Convert `standard_value`'s datatype from `object` to `float`2. Delete entries with missing values3. Keep only entries with `standard_unit == nM`4. Delete duplicate molecules5. Reset `DataFrame` index6. Rename columns **1. Convert datatype of "standard_value" from "object" to "float"**The field `standard_value` holds standardized (here IC50) values. In order to make these values usable in calculations later on, convert values to floats.
###Code
bioactivities_df.dtypes
bioactivities_df = bioactivities_df.astype({"standard_value": "float64"})
bioactivities_df.dtypes
###Output
_____no_output_____
###Markdown
**2. Delete entries with missing values**Use the parameter `inplace=True` to drop values in the current `DataFrame` directly.
###Code
bioactivities_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (7177, 11)
###Markdown
**3. Keep only entries with "standard_unit == nM"** We only want to keep bioactivity entries in `nM`, thus we remove all entries with other units.
###Code
print(f"Units in downloaded data: {bioactivities_df['standard_units'].unique()}")
print(
f"Number of non-nM entries:\
{bioactivities_df[bioactivities_df['standard_units'] != 'nM'].shape[0]}"
)
bioactivities_df = bioactivities_df[bioactivities_df["standard_units"] == "nM"]
print(f"Units after filtering: {bioactivities_df['standard_units'].unique()}")
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (7113, 11)
###Markdown
**4. Delete duplicate molecules**Sometimes the same molecule (`molecule_chembl_id`) has been tested more than once, in this case, we only keep the first one.Note other choices could be to keep the one with the best value or a mean value of all assay results for the respective compound.
###Code
bioactivities_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (5451, 11)
###Markdown
**5. Reset "DataFrame" index**Since we deleted some rows and want to iterate over the index later, we reset the index to be continuous.
###Code
bioactivities_df.reset_index(drop=True, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
###Markdown
**6. Rename columns**
###Code
bioactivities_df.rename(
columns={"standard_value": "IC50", "standard_units": "units"}, inplace=True
)
bioactivities_df.head()
print(f"DataFrame shape: {bioactivities_df.shape}")
# NBVAL_CHECK_OUTPUT
###Output
DataFrame shape: (5451, 11)
###Markdown
We now have a set of **5451** molecule ids with respective IC50 values for our target kinase. Get compound dataWe have a `DataFrame` containing all molecules tested against EGFR (with the respective measured bioactivity). Now, we want to get the molecular structures of the molecules that are linked to respective bioactivity ChEMBL IDs. Fetch compound data from ChEMBLLet's have a look at the compounds from ChEMBL which we have defined bioactivity data for: We fetch compound ChEMBL IDs and structures for the compounds linked to our filtered bioactivity data.
###Code
compounds = compounds_api.filter(
molecule_chembl_id__in=list(bioactivities_df["molecule_chembl_id"])
).only("molecule_chembl_id", "molecule_structures")
###Output
_____no_output_____
###Markdown
Download compound data from ChEMBLAgain, we download the `QuerySet` in the form of a `pandas` `DataFrame`. **This may take some time.**
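If you would like a progress bar while the records are fetched, the lazy `QuerySet` can first be materialised with `tqdm` (a sketch that assumes `tqdm` is installed; this copy of the notebook converts the `QuerySet` directly in the next cell):

```python
# Optional sketch: turn the lazy QuerySet into a list with a progress bar,
# then build the DataFrame from that list of records.
from tqdm.auto import tqdm

compound_records = list(tqdm(compounds))
compounds_df = pd.DataFrame.from_records(compound_records)
```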
###Code
compounds_df = pd.DataFrame.from_records(compounds)
print(f"DataFrame shape: {compounds_df.shape}")
compounds_df.head()
###Output
_____no_output_____
###Markdown
Preprocess and filter compound data1. Remove entries with missing molecule structures2. Delete duplicate molecules (by molecule_chembl_id)3. Get molecules with canonical SMILES **1. Remove entries with a missing molecule structure entry**
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (5445, 2)
###Markdown
**2. Delete duplicate molecules**
###Code
compounds_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (5444, 2)
###Markdown
**3. Get molecules with canonical SMILES**So far, we have multiple different molecular structure representations. We only want to keep the canonical SMILES.
###Code
compounds_df.iloc[0].molecule_structures.keys()
canonical_smiles = []
for i, compounds in compounds_df.iterrows():
try:
canonical_smiles.append(compounds["molecule_structures"]["canonical_smiles"])
except KeyError:
canonical_smiles.append(None)
compounds_df["smiles"] = canonical_smiles
compounds_df.drop("molecule_structures", axis=1, inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (5444, 2)
###Markdown
Sanity check: Remove all molecules without a canonical SMILES string.
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
# NBVAL_CHECK_OUTPUT
###Output
DataFrame shape: (5444, 2)
###Markdown
We now have a set of **5444** molecule ids with respective IC50 values for our target kinase. Output (bioactivity-compound) data**Summary of compound and bioactivity data**
###Code
print(f"Bioactivities filtered: {bioactivities_df.shape[0]}")
bioactivities_df.columns
print(f"Compounds filtered: {compounds_df.shape[0]}")
compounds_df.columns
###Output
Compounds filtered: 5444
###Markdown
Merge both datasetsMerge values of interest from `bioactivities_df` and `compounds_df` in an `output_df` based on the compounds' ChEMBL IDs (`molecule_chembl_id`), keeping the following columns:* ChEMBL IDs: `molecule_chembl_id`* SMILES: `smiles`* units: `units`* IC50: `IC50`
###Code
# Merge DataFrames
output_df = pd.merge(
bioactivities_df[["molecule_chembl_id", "IC50", "units"]],
compounds_df,
on="molecule_chembl_id",
)
# Reset row indices
output_df.reset_index(drop=True, inplace=True)
print(f"Dataset with {output_df.shape[0]} entries.")
# NBVAL_CHECK_OUTPUT
###Output
Dataset with 5444 entries.
###Markdown
Sanity check: The merged bioactivities/compound data set contains **5444** entries.
###Code
output_df.dtypes
output_df.head(10)
###Output
_____no_output_____
###Markdown
Add pIC50 values As you can see, the raw IC50 values are difficult to compare (the values are distributed over multiple orders of magnitude), which is why we convert the IC50 values to pIC50.
###Code
def convert_ic50_to_pic50(IC50_value):
pIC50_value = 9 - math.log10(IC50_value)
return pIC50_value
# Apply conversion to each row of the compounds DataFrame
output_df["pIC50"] = output_df.apply(lambda x: convert_ic50_to_pic50(x.IC50), axis=1)
output_df.head()
###Output
_____no_output_____
###Markdown
Draw compound dataLet's have a look at our collected data set.First, we plot the pIC50 value distribution
###Code
output_df.hist(column="pIC50")
###Output
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-7xhr_mbj because the default path (/home/andrea/.cache/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
###Markdown
In the next steps, we add a column for RDKit molecule objects to our `DataFrame` and look at the structures of the molecules with the highest pIC50 values.
###Code
# Add molecule column
PandasTools.AddMoleculeColumnToFrame(output_df, smilesCol="smiles")
# Sort molecules by pIC50
output_df.sort_values(by="pIC50", ascending=False, inplace=True)
# Reset index
output_df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Show the three most active molecules, i.e. molecules with the highest pIC50 values.
###Code
output_df.drop("smiles", axis=1).head(3)
###Output
_____no_output_____
###Markdown
Write output data to fileWe want to use this bioactivity-compound dataset in the following talktorials, thus we save the data as a `csv` file. Note that it is advisable to drop the molecule column (which only contains an image of the molecules) when saving the data.
###Code
output_df.drop("ROMol", axis=1).to_csv(DATA / "EGFR_compounds.csv")
print(f"DataFrame shape: {output_df.shape}")
###Output
DataFrame shape: (5444, 6)
###Markdown
T001 · Compound data acquisition (ChEMBL)**Note:** This talktorial is a part of TeachOpenCADD, a platform that aims to teach domain-specific skills and to provide pipeline templates as starting points for research projects.Authors:- Svetlana Leng, CADD seminar 2017, Volkamer lab, Charité/FU Berlin - Paula Junge, CADD seminar 2018, Volkamer lab, Charité/FU Berlin- Dominique Sydow, 2019-2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Andrea Volkamer, 2020, [Volkamer lab, Charité](https://volkamerlab.org/)- Yonghui Chen, 2020, [Volkamer lab, Charité](https://volkamerlab.org/) __Talktorial T001__: This talktorial is part of the TeachOpenCADD pipeline described in the [first TeachOpenCADD paper](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x), comprising of talktorials T001-T010. Aim of this talktorialIn this notebook, we will learn more about the ChEMBL database and how to extract data from ChEMBL, i.e. (compound, activity data) pairs for a target of interest. These data sets can be used for many cheminformatics tasks, such as similarity search, clustering or machine learning.Our work here will include finding compounds which were tested against a certain target and filtering available bioactivity data. Contents in *Theory** ChEMBL database * ChEMBL web services * ChEMBL webresource client* Compound activity measures * IC50 measure * pIC50 value Contents in *Practical* **Goal: Get a list of compounds with bioactivity data for a given target*** Connect to ChEMBL database* Get target data (example: EGFR kinase) * Fetch and download target data * Select target ChEMBL ID* Get bioactivity data * Fetch and download bioactivity data for target * Preprocess and filter bioactivity data* Get compound data * Fetch and download compound data * Preprocess and filter compound data* Output bioactivity-compound data * Merge bioactivity and compound data, and add pIC50 values * Draw molecules with highest pIC50 * Freeze bioactivity data to ChEMBL 27 * Write output file References* ChEMBL bioactivity database: [Gaulton *et al.*, Nucleic Acids Res. (2017), 45(Database issue), D945–D954](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5210557/)* ChEMBL web services: [Davies *et al.*, Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881) * [ChEMBL web-interface](https://www.ebi.ac.uk/chembl/)* GitHub [ChEMBL webrescource client](https://github.com/chembl/chembl_webresource_client)* The EBI RDF platform: [Jupp *et al.*, Bioinformatics (2014), 30(9), 1338-9](https://www.ncbi.nlm.nih.gov/pubmed/24413672)* Info on half maximal inhibitory concentration: [(p)IC50](https://en.wikipedia.org/wiki/IC50)* [UniProt website](https://www.uniprot.org/) Theory ChEMBL database>"ChEMBL is a manually curated database of bioactive molecules with drug-like properties. It brings together chemical, bioactivity and genomic data to aid the translation of genomic information into effective new drugs." 
([ChEMBL website](https://www.ebi.ac.uk/chembl/))* Open large-scale bioactivity database* **Current data content (as of 09.2020, ChEMBL 27):** * \>1.9 million distinct compounds * \>16 million activity values * Assays are mapped to ~13,000 targets* **Data sources** include scientific literature, PubChem bioassays, Drugs for Neglected Diseases Initiative (DNDi), BindingDB database, ...* ChEMBL data can be accessed via a [web-interface](https://www.ebi.ac.uk/chembl/), the [EBI-RDF platform](https://www.ncbi.nlm.nih.gov/pubmed/24413672) and the [ChEMBL webrescource client](https://github.com/chembl/chembl_webresource_client) ChEMBL web services* RESTful web service* ChEMBL web service version 2.x resource schema: *Figure 1:* "[ChEMBL web service schema diagram](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/figure/F2/). The oval shapes represent ChEMBL web service resources and the line between two resources indicates that they share a common attribute. The arrow direction shows where the primary information about a resource type can be found. A dashed line indicates the relationship between two resources behaves differently. For example, the `Image` resource provides a graphical based representation of a `Molecule`."Figure and description taken from: [Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881). ChEMBL webresource client* Python client library for accessing ChEMBL data* Handles interaction with the HTTPS protocol* Lazy evaluation of results -> reduced number of network requests Compound activity measures IC50 measure* [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50)* Indicates how much of a particular drug or other substance is needed to inhibit a given biological process by half*Figure 2:* Visual demonstration of how to derive an IC50 value: (i) Arrange inhibition data on y-axis and log(concentration) on x-axis. (ii) Identify maximum and minimum inhibition. (iii) The IC50 is the concentration at which the curve passes through the 50% inhibition level. Figure ["Example IC50 curve demonstrating visually how IC50 is derived"](https://en.wikipedia.org/wiki/IC50/media/File:Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png) by JesseAlanGordon is licensed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). pIC50 value* To facilitate the comparison of IC50 values, which have a large value range and are given in different units (M, nM, ...), often pIC50 values are used* The pIC50 is the negative log of the IC50 value when converted to molar units: $ pIC_{50} = -log_{10}(IC_{50}) $, where $ IC_{50}$ is specified in units of M* Higher pIC50 values indicate exponentially greater potency of the drug* Note that the conversion can be adapted to the respective IC50 unit, e.g. for nM: $pIC_{50} = -log_{10}(IC_{50}*10^{-9})= 9-log_{10}(IC_{50}) $Other activity measures:Besides, IC50 and pIC50, other bioactivity measures are used, such as the equilibrium constant [KI](https://en.wikipedia.org/wiki/Equilibrium_constant) and the half maximal effective concentration [EC50](https://en.wikipedia.org/wiki/EC50). PracticalIn the following, we want to download all molecules that have been tested against our target of interest, the **epidermal growth factor receptor** ([**EGFR**](https://www.uniprot.org/uniprot/P00533)) kinase. Connect to ChEMBL database First, the ChEMBL webresource client as well as other Python libraries are imported.
###Code
import math
from pathlib import Path
from zipfile import ZipFile
from tempfile import TemporaryDirectory
import numpy as np
import pandas as pd
from rdkit.Chem import PandasTools
from chembl_webresource_client.new_client import new_client
from tqdm.auto import tqdm
HERE = Path(_dh[-1])
DATA = HERE / "data"
###Output
_____no_output_____
###Markdown
Next, we create resource objects for API access.
###Code
targets_api = new_client.target
compounds_api = new_client.molecule
bioactivities_api = new_client.activity
type(targets_api)
###Output
_____no_output_____
###Markdown
Get target data (EGFR kinase)* Get UniProt ID of the target of interest (EGFR kinase: [P00533](http://www.uniprot.org/uniprot/P00533)) from [UniProt website](https://www.uniprot.org/)* Use UniProt ID to get target informationSelect a different UniProt ID, if you are interested in another target.
###Code
uniprot_id = "P00533"
###Output
_____no_output_____
###Markdown
Fetch target data from ChEMBL
###Code
# Get target information from ChEMBL but restrict it to specified values only
targets = targets_api.get(target_components__accession=uniprot_id).only(
"target_chembl_id", "organism", "pref_name", "target_type"
)
print(f'The type of the targets is "{type(targets)}"')
###Output
The type of the targets is "<class 'chembl_webresource_client.query_set.QuerySet'>"
###Markdown
Download target data from ChEMBLThe results of the query are stored in `targets`, a `QuerySet`, i.e. the results are not fetched from ChEMBL until we ask for it (here using `pandas.DataFrame.from_records`).More information about the `QuerySet` datatype:> QuerySets are lazy – the act of creating a QuerySet does not involve any database activity. You can stack filters together all day long, and Django will actually not run the query until the QuerySet is evaluated. ([querysets-are-lazy](https://docs.djangoproject.com/en/3.0/topics/db/queries/querysets-are-lazy))
###Code
targets = pd.DataFrame.from_records(targets)
targets
###Output
_____no_output_____
###Markdown
Select target (target ChEMBL ID)After checking the entries, we select the first entry as our target of interest:`CHEMBL203`: It is a single protein and represents the human Epidermal growth factor receptor (EGFR, also named erbB1)
###Code
target = targets.iloc[0]
target
###Output
_____no_output_____
###Markdown
Save selected ChEMBL ID.
###Code
chembl_id = target.target_chembl_id
print(f"The target ChEMBL ID is {chembl_id}")
# NBVAL_CHECK_OUTPUT
###Output
The target ChEMBL ID is CHEMBL203
###Markdown
Get bioactivity dataNow, we want to query bioactivity data for the target of interest. Fetch bioactivity data for the target from ChEMBL In this step, we fetch the bioactivity data and filter it to only consider* human proteins, * bioactivity type IC50, * exact measurements (relation `'='`), and* binding data (assay type `'B'`).
###Code
bioactivities = bioactivities_api.filter(
target_chembl_id=chembl_id, type="IC50", relation="=", assay_type="B"
).only(
"activity_id",
"assay_chembl_id",
"assay_description",
"assay_type",
"molecule_chembl_id",
"type",
"standard_units",
"relation",
"standard_value",
"target_chembl_id",
"target_organism",
)
print(f"Length and type of bioactivities object: {len(bioactivities)}, {type(bioactivities)}")
###Output
Length and type of bioactivities object: 8816, <class 'chembl_webresource_client.query_set.QuerySet'>
###Markdown
Each entry in our bioactivity set holds the following information:
###Code
print(f"Length and type of first element: {len(bioactivities[0])}, {type(bioactivities[0])}")
bioactivities[0]
###Output
Length and type of first element: 13, <class 'dict'>
###Markdown
Download bioactivity data from ChEMBL Finally, we download the `QuerySet` in the form of a `pandas` `DataFrame`. > **Note**: This step should not take more than 2 minutes; if it does, try to rerun all cells starting from _"Fetch bioactivity data for the target from ChEMBL"_ or read the message below: Load a local version of the data (in case you encounter any problems while fetching the data) If you experience difficulties querying the ChEMBL database, we also provide the resulting dataframe you will construct in the cell below. If you want to use the saved version, use the following code instead to obtain `bioactivities_df`:
```python
# replace the first line in the cell below with this line instead
bioactivities_df = pd.read_csv(DATA / "EGFR_bioactivities_CHEMBL27.csv.zip", index_col=0)
```
###Code
bioactivities_df = pd.DataFrame.from_records(bioactivities)
print(f"DataFrame shape: {bioactivities_df.shape}")
bioactivities_df.head()
###Output
DataFrame shape: (8817, 13)
###Markdown
Note that the first two rows describe the same bioactivity entry; we will remove such artifacts later during the deduplication step. Note also that we have columns for `standard_units`/`units` and `standard_values`/`values`; in the following, we will use the standardized columns (standardization by ChEMBL), and thus, we drop the other two columns.If we used the `units` and `values` columns, we would need to convert all values with many different units to nM:
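For illustration only, such a conversion could look roughly like the sketch below (not run here; the unit map is illustrative and the real set of units, shown by the next cell, may be larger):

```python
# Metric-prefix conversion factors to nM (illustrative subset)
to_nM = {"nM": 1.0, "uM": 1e3, "µM": 1e3, "mM": 1e6, "M": 1e9, "pM": 1e-3}
bioactivities_df["value_nM"] = bioactivities_df.apply(
    lambda row: float(row["value"]) * to_nM.get(row["units"], float("nan")), axis=1
)
```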
###Code
bioactivities_df["units"].unique()
bioactivities_df.drop(["units", "value"], axis=1, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
###Markdown
Preprocess and filter bioactivity data1. Convert `standard_value`'s datatype from `object` to `float`2. Delete entries with missing values3. Keep only entries with `standard_unit == nM`4. Delete duplicate molecules5. Reset `DataFrame` index6. Rename columns **1. Convert datatype of "standard_value" from "object" to "float"**The field `standard_value` holds standardized (here IC50) values. In order to make these values usable in calculations later on, convert values to floats.
###Code
bioactivities_df.dtypes
bioactivities_df = bioactivities_df.astype({"standard_value": "float64"})
bioactivities_df.dtypes
###Output
_____no_output_____
###Markdown
**2. Delete entries with missing values**Use the parameter `inplace=True` to drop values in the current `DataFrame` directly.
###Code
bioactivities_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (8816, 11)
###Markdown
**3. Keep only entries with "standard_unit == nM"** We only want to keep bioactivity entries in `nM`, thus we remove all entries with other units.
###Code
print(f"Units in downloaded data: {bioactivities_df['standard_units'].unique()}")
print(
f"Number of non-nM entries:\
{bioactivities_df[bioactivities_df['standard_units'] != 'nM'].shape[0]}"
)
bioactivities_df = bioactivities_df[bioactivities_df["standard_units"] == "nM"]
print(f"Units after filtering: {bioactivities_df['standard_units'].unique()}")
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (8747, 11)
###Markdown
**4. Delete duplicate molecules**Sometimes the same molecule (`molecule_chembl_id`) has been tested more than once, in this case, we only keep the first one.Note other choices could be to keep the one with the best value or a mean value of all assay results for the respective compound.
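For reference, a sketch of the mean-aggregation alternative mentioned above (not used in this talktorial) could look like this:

```python
# Average replicate IC50 measurements per molecule instead of keeping only the first entry
mean_values = (
    bioactivities_df.groupby("molecule_chembl_id", as_index=False)["standard_value"].mean()
)
```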
###Code
bioactivities_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (6059, 11)
###Markdown
**5. Reset "DataFrame" index**Since we deleted some rows, but we want to iterate over the index later, we reset the index to be continuous.
###Code
bioactivities_df.reset_index(drop=True, inplace=True)
bioactivities_df.head()
###Output
_____no_output_____
###Markdown
**6. Rename columns**
###Code
bioactivities_df.rename(
columns={"standard_value": "IC50", "standard_units": "units"}, inplace=True
)
bioactivities_df.head()
print(f"DataFrame shape: {bioactivities_df.shape}")
###Output
DataFrame shape: (6059, 11)
###Markdown
We now have a set of **6059** molecule ids with respective IC50 values for our target kinase (matching the DataFrame shape shown above). Get compound dataWe have a `DataFrame` containing all molecules tested against EGFR (with the respective measured bioactivity). Now, we want to get the molecular structures of the molecules linked to the respective bioactivity ChEMBL IDs. Fetch compound data from ChEMBLLet's have a look at the compounds from ChEMBL for which we have bioactivity data: We fetch compound ChEMBL IDs and structures for the compounds linked to our filtered bioactivity data.
###Code
compounds_provider = compounds_api.filter(
molecule_chembl_id__in=list(bioactivities_df["molecule_chembl_id"])
).only("molecule_chembl_id", "molecule_structures")
###Output
_____no_output_____
###Markdown
Download compound data from ChEMBLAgain, we want to export the `QuerySet` object into a `pandas.DataFrame`. Given the data volume, **this can take some time.** For that reason, we will first obtain the list of records through `tqdm`, so we get a nice progress bar and some ETAs. We can then pass the list of compounds to the DataFrame.
###Code
compounds = list(tqdm(compounds_provider))
compounds_df = pd.DataFrame.from_records(
compounds,
)
print(f"DataFrame shape: {compounds_df.shape}")
compounds_df.head()
###Output
_____no_output_____
###Markdown
Preprocess and filter compound data1. Remove entries with missing molecule structures2. Delete duplicate molecules (by molecule_chembl_id)3. Get molecules with canonical SMILES **1. Remove entries with a missing molecule structure entry**
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
**2. Delete duplicate molecules**
###Code
compounds_df.drop_duplicates("molecule_chembl_id", keep="first", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
**3. Get molecules with canonical SMILES**So far, we have multiple different molecular structure representations. We only want to keep the canonical SMILES.
###Code
compounds_df.iloc[0].molecule_structures.keys()
canonical_smiles = []
for i, compound in compounds_df.iterrows():
    try:
        canonical_smiles.append(compound["molecule_structures"]["canonical_smiles"])
    except KeyError:
        canonical_smiles.append(None)
compounds_df["smiles"] = canonical_smiles
compounds_df.drop("molecule_structures", axis=1, inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
Sanity check: Remove all molecules without a canonical SMILES string.
###Code
compounds_df.dropna(axis=0, how="any", inplace=True)
print(f"DataFrame shape: {compounds_df.shape}")
###Output
DataFrame shape: (6052, 2)
###Markdown
Output (bioactivity-compound) data**Summary of compound and bioactivity data**
###Code
print(f"Bioactivities filtered: {bioactivities_df.shape[0]}")
bioactivities_df.columns
print(f"Compounds filtered: {compounds_df.shape[0]}")
compounds_df.columns
###Output
Compounds filtered: 6052
###Markdown
Merge both datasetsMerge values of interest from `bioactivities_df` and `compounds_df` in an `output_df` based on the compounds' ChEMBL IDs (`molecule_chembl_id`), keeping the following columns:* ChEMBL IDs: `molecule_chembl_id`* SMILES: `smiles`* units: `units`* IC50: `IC50`
###Code
# Merge DataFrames
output_df = pd.merge(
bioactivities_df[["molecule_chembl_id", "IC50", "units"]],
compounds_df,
on="molecule_chembl_id",
)
# Reset row indices
output_df.reset_index(drop=True, inplace=True)
print(f"Dataset with {output_df.shape[0]} entries.")
output_df.dtypes
output_df.head(10)
###Output
_____no_output_____
###Markdown
Add pIC50 values As you can see the low IC50 values are difficult to read (values are distributed over multiple scales), which is why we convert the IC50 values to pIC50.
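The conversion used below is the standard definition

$$\text{pIC}_{50} = -\log_{10}\left(\text{IC}_{50}\,[\mathrm{M}]\right) = 9 - \log_{10}\left(\text{IC}_{50}\,[\mathrm{nM}]\right),$$

so, for example, an IC50 of 100 nM corresponds to a pIC50 of $9 - 2 = 7$.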
###Code
def convert_ic50_to_pic50(IC50_value):
pIC50_value = 9 - math.log10(IC50_value)
return pIC50_value
# Apply conversion to each row of the compounds DataFrame
output_df["pIC50"] = output_df.apply(lambda x: convert_ic50_to_pic50(x.IC50), axis=1)
output_df.head()
###Output
_____no_output_____
###Markdown
Draw compound dataLet's have a look at our collected data set.First, we plot the pIC50 value distribution
###Code
output_df.hist(column="pIC50")
###Output
_____no_output_____
###Markdown
In the next steps, we add a column for RDKit molecule objects to our `DataFrame` and look at the structures of the molecules with the highest pIC50 values.
###Code
# Add molecule column
PandasTools.AddMoleculeColumnToFrame(output_df, smilesCol="smiles")
# Sort molecules by pIC50
output_df.sort_values(by="pIC50", ascending=False, inplace=True)
# Reset index
output_df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Show the three most active molecules, i.e. molecules with the highest pIC50 values.
###Code
output_df.drop("smiles", axis=1).head(3)
# Prepare saving the dataset: Drop the ROMol column
output_df = output_df.drop("ROMol", axis=1)
print(f"DataFrame shape: {output_df.shape}")
###Output
DataFrame shape: (6052, 5)
###Markdown
Freeze output data to ChEMBL 27This is a technical step: Usually, we would continue to work with the dataset that we just created (latest dataset). However, here on the TeachOpenCADD platform, we prefer to freeze the dataset to a certain ChEMBL releases (i.e. [ChEMBL 27](http://doi.org/10.6019/CHEMBL.database.27)), so that this talktorial and other talktorials downstream in our CADD pipeline do not change in the future (helping us to maintain the talktorials). Note: If you prefer to run this notebook on the latest dataset or if you want to use it for another target, please comment the cell below.
###Code
# Disable this cell to unfreeze the dataset
output_df = pd.read_csv(
DATA / "EGFR_compounds_ea055ef.csv", index_col=0, float_precision="round_trip"
)
output_df.head()
print(f"DataFrame shape: {output_df.shape}")
# NBVAL_CHECK_OUTPUT
###Output
DataFrame shape: (5568, 5)
###Markdown
Write output data to fileWe want to use this bioactivity-compound dataset in the following talktorials, thus we save the data as `csv` file. Note that it is advisable to drop the molecule column (which only contains an image of the molecules) when saving the data.
###Code
output_df.to_csv(DATA / "EGFR_compounds.csv")
output_df.head()
print(f"DataFrame shape: {output_df.shape}")
# NBVAL_CHECK_OUTPUT
###Output
DataFrame shape: (5568, 5)
|
.ipynb_checkpoints/case1emilrantanen-checkpoint.ipynb | ###Markdown
Case 1. Heart Disease Classification Cognitive Systems for Health Technology Applications 3.2.2019, Emil Rantanen and Wille Tuovinen Metropolia University of Applied Sciences This is the code made for the Case 1 exercise of the Cognitive Systems for Health Technology Applications course. Due to a lack of experience in the subject, it is mainly based on the teacher's drafty notes example. While Wille and I signed up to work as a team, we ended up writing our own code this time to get used to the environment. As such, most of the time was spent tinkering with the code and figuring out what the different parts do.Link to the drafty notes can be found here: https://github.com/sakluk/cognitive-systems-for-health-technology/blob/master/Week_2_Case_1_(drafty_notes).ipynb
###Code
import time
import warnings
import pandas as pd
import matplotlib.pyplot as plt
from pylab import *
from sklearn.utils import shuffle
from sklearn.preprocessing import normalize
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support
from keras.utils import to_categorical
from keras import models, layers
from keras.models import Sequential
from keras.layers import Dense, Activation
###Output
_____no_output_____
###Markdown
Reading the fileIn this first section the data file is read and we give the columns their proper names. Empty entries are filled with the median value of their respective column. The rows are then shuffled and we check that the data looks like what we might expect.
###Code
#reading the file and applying proper names for the columns
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data'
df = pd.read_csv(url,header=None,index_col=None,sep=',',na_values='?')
df.columns = ['age','sex','cp','trestbps','chol','fbs','restecg','thalach',
'exang','oldpeak','slope','ca','thal','num']
#fill missing data with respective column's median value, then shuffle the rows
df = df.fillna(df.median())
df = shuffle(df)
df.describe()
###Output
_____no_output_____
###Markdown
Preparing for trainingIn this section we prepare the data for training. First we separate the input from the output and normalize it. We make it so that a person in the data is either diseased or healthy. Then we pick 200 rows for training and the rest are used for validation.
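The normalization applied below is simple min–max scaling per feature,

$$x_{\text{norm}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$$

which maps every feature into the range $[0, 1]$.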
###Code
#separate the data from the disease diagnosis
dataList = ['age','sex','cp','trestbps','chol','fbs','restecg','thalach',
'exang','oldpeak','slope','ca','thal']
data = df[dataList]
dataMin = data.min()
dataMax = data.max()
dataNorm = (data - dataMin)/(dataMax - dataMin)
labels = df['num']
#make it so diagnosis is either 1 or 0
labels = 1.0*(labels>0.0)
labelsOnehot = to_categorical(labels)
#200 rows for training, rest (103) for validation
trainData = dataNorm[:200]
valData = dataNorm[200:]
trainLabels = labelsOnehot[:200]
valLabels = labelsOnehot[200:]
print('Shape of')
print(' full data: ',dataNorm.shape)
print(' train data: ',trainData.shape)
print(' validation data: ',valData.shape)
print(' one-hot-labels: ',labelsOnehot.shape)
print(' train labels: ',trainLabels.shape)
print(' validation labels:',valLabels.shape)
###Output
Shape of
full data: (303, 13)
train data: (200, 13)
validation data: (103, 13)
one-hot-labels: (303, 2)
train labels: (200, 2)
validation labels: (103, 2)
###Markdown
Building and compiling the modelHere we build and compile the model. I need to experiment with this more in the coming weeks, as a lot of this is still an unknown area for me.
###Code
#Building the model
model = Sequential()
model.add(layers.Dense(9, activation='relu', input_shape=(13,)))
model.add(layers.Dense(2, activation='softmax'))
model.summary()
#...and compiling it
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
# Fit the model with the data and keep record on elapsed time
trainStart = time.time()
history = model.fit(trainData, trainLabels,
epochs = 50,
batch_size = 10,
verbose = 0,
validation_data = (valData, valLabels))
trainEnd = time.time()
print('Elapsed time: {:.2f} seconds'.format(trainEnd - trainStart))
###Output
Elapsed time: 1.10 seconds
###Markdown
Plotting figuresWe get the data from the training history and then plot figures for training loss and accuracy. I tried different numbers of epochs and batch sizes, but these settings seemed to give the best results.
###Code
#get the data from the training
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
epochs = range(1, len(loss_values) + 1)
#plotting a figure for training loss
plt.figure()
plt.plot(epochs, loss_values, 'b', label='Training loss')
plt.plot(epochs, val_loss_values, 'darkorange', label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.ylim([0, 2])
plt.legend()
plt.show()
#plotting a figure for training accuracy
plt.figure()
plt.plot(epochs, acc_values, 'b', label='Training acc')
plt.plot(epochs, val_acc_values, 'darkorange', label='Validation acc')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.ylim([0, 1])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Confusion matrixHere we print the total accuracy and the best-guess accuracy. The confusion matrix for the total accuracy shows a good number of true positives and true negatives.
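As a point of reference, the "simple guess" baseline below labels every patient as healthy, so its accuracy equals the majority-class share of the data: $164/303 \approx 0.54$.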
###Code
# Print total accuracy and confusion matrix
valPredicts = model.predict(dataNorm)
y_pred = argmax(valPredicts, axis = 1)
# Best guess = Guess that all are normal
simpleGuess = zeros(len(y_pred))
cm0 = confusion_matrix(labels, simpleGuess)
true0 = np.trace(cm0)
N = len(labels)
acc0 = true0/N
print('Simple guess accuracy: {:.4f}'.format(acc0))
print('Confusion matrix:')
print(cm0)
#printing total accuracy
print(' ')
cm1 = confusion_matrix(labels, y_pred)
true1 = np.trace(cm1)
N = len(labels)
acc1 = true1/N
print('Total accuracy: {:.4f}'.format(acc1))
print('Confusion matrix:')
print(cm1)
###Output
Simple guess accuracy: 0.5413
Confusion matrix:
[[164 0]
[139 0]]
Total accuracy: 0.8515
Confusion matrix:
[[145 19]
[ 26 113]]
|
code/chapter07_optimization/7.3_minibatch-sgd.ipynb | ###Markdown
7.3 Minibatch Stochastic Gradient Descent
###Code
%matplotlib inline
import numpy as np
import time
import torch
from torch import nn, optim
import sys
sys.path.append("..")
import d2lzh_pytorch as d2l
print(torch.__version__)
###Output
1.0.0
###Markdown
7.3.1 Reading the Data
###Code
def get_data_ch7():  # this function is saved in the d2lzh_pytorch package for later use
    data = np.genfromtxt('../../data/airfoil_self_noise.dat', delimiter='\t')
    data = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize
    return torch.tensor(data[:1500, :-1], dtype=torch.float32), \
    torch.tensor(data[:1500, -1], dtype=torch.float32)  # first 1500 examples (5 features each)
features, labels = get_data_ch7()
features.shape
###Output
_____no_output_____
###Markdown
7.3.2 Implementation from Scratch
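The from-scratch optimizer below applies the plain minibatch SGD update

$$\boldsymbol{w} \leftarrow \boldsymbol{w} - \eta \, \nabla_{\boldsymbol{w}} \ell(\boldsymbol{w}),$$

where $\eta$ is `hyperparams['lr']`; because `train_ch7` averages the loss over the minibatch, the gradient already includes the $1/|\mathcal{B}|$ factor and no extra division by the batch size is needed.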
###Code
def sgd(params, states, hyperparams):
for p in params:
p.data -= hyperparams['lr'] * p.grad.data
# this function is saved in the d2lzh_pytorch package for later use
def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # initialize the model
net, loss = d2l.linreg, d2l.squared_loss
w = torch.nn.Parameter(torch.tensor(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=torch.float32),
requires_grad=True)
b = torch.nn.Parameter(torch.zeros(1, dtype=torch.float32), requires_grad=True)
def eval_loss():
return loss(net(features, w, b), labels).mean().item()
ls = [eval_loss()]
data_iter = torch.utils.data.DataLoader(
torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)
for _ in range(num_epochs):
start = time.time()
for batch_i, (X, y) in enumerate(data_iter):
            l = loss(net(X, w, b), y).mean()  # use the averaged loss
            # reset the gradients to zero
if w.grad is not None:
w.grad.data.zero_()
b.grad.data.zero_()
l.backward()
            optimizer_fn([w, b], states, hyperparams)  # update the model parameters
if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training error every 100 examples
    # print the result and plot
print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
d2l.set_figsize()
d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
d2l.plt.xlabel('epoch')
d2l.plt.ylabel('loss')
def train_sgd(lr, batch_size, num_epochs=2):
train_ch7(sgd, None, {'lr': lr}, features, labels, batch_size, num_epochs)
train_sgd(1, 1500, 6)
train_sgd(0.005, 1)
train_sgd(0.05, 10)
###Output
loss: 0.245523, 0.050718 sec per epoch
###Markdown
7.3.3 Concise Implementation
###Code
# unlike the original book, the first argument here is an optimizer constructor rather than the optimizer's name
# e.g.: optimizer_fn=torch.optim.SGD, optimizer_hyperparams={"lr": 0.05}
def train_pytorch_ch7(optimizer_fn, optimizer_hyperparams, features, labels,
batch_size=10, num_epochs=2):
    # initialize the model
net = nn.Sequential(
nn.Linear(features.shape[-1], 1)
)
loss = nn.MSELoss()
optimizer = optimizer_fn(net.parameters(), **optimizer_hyperparams)
def eval_loss():
return loss(net(features).view(-1), labels).item() / 2
ls = [eval_loss()]
data_iter = torch.utils.data.DataLoader(
torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)
for _ in range(num_epochs):
start = time.time()
for batch_i, (X, y) in enumerate(data_iter):
            # divide by 2 to stay consistent with train_ch7, since squared_loss divides by 2
l = loss(net(X).view(-1), y) / 2
optimizer.zero_grad()
l.backward()
optimizer.step()
if (batch_i + 1) * batch_size % 100 == 0:
ls.append(eval_loss())
    # print the result and plot
print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
d2l.set_figsize()
d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
d2l.plt.xlabel('epoch')
d2l.plt.ylabel('loss')
train_pytorch_ch7(optim.SGD, {"lr": 0.05}, features, labels, 10)
###Output
loss: 0.245491, 0.044150 sec per epoch
###Markdown
7.3 Minibatch Stochastic Gradient Descent
###Code
%matplotlib inline
import numpy as np
import time
import torch
from torch import nn, optim
import sys
sys.path.append("..")
import d2lzh_pytorch as d2l
print(torch.__version__)
###Output
1.2.0+cpu
###Markdown
7.3.1 Reading the Data
###Code
def get_data_ch7():  # this function is saved in the d2lzh_pytorch package for later use
    data = np.genfromtxt('../../data/airfoil_self_noise.dat', delimiter='\t')
    data = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize
    return torch.tensor(data[:1500, :-1], dtype=torch.float32), \
    torch.tensor(data[:1500, -1], dtype=torch.float32)  # first 1500 examples (5 features each)
features, labels = get_data_ch7()
features.shape
features
###Output
_____no_output_____
###Markdown
7.3.2 Implementation from Scratch
###Code
def sgd(params, states, hyperparams):
for p in params:
p.data -= hyperparams['lr'] * p.grad.data
# this function is saved in the d2lzh_pytorch package for later use
def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # initialize the model
net, loss = d2l.linreg, d2l.squared_loss
w = torch.nn.Parameter(torch.tensor(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=torch.float32),
requires_grad=True)
b = torch.nn.Parameter(torch.zeros(1, dtype=torch.float32), requires_grad=True)
def eval_loss():
return loss(net(features, w, b), labels).mean().item()
ls = [eval_loss()]
data_iter = torch.utils.data.DataLoader(
torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)
for _ in range(num_epochs):
start = time.time()
for batch_i, (X, y) in enumerate(data_iter):
            l = loss(net(X, w, b), y).mean()  # use the averaged loss
            # reset the gradients to zero
if w.grad is not None:
w.grad.data.zero_()
b.grad.data.zero_()
l.backward()
            optimizer_fn([w, b], states, hyperparams)  # update the model parameters
if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training error every 100 examples
    # print the result and plot
print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
d2l.set_figsize()
d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
d2l.plt.xlabel('epoch')
d2l.plt.ylabel('loss')
def train_sgd(lr, batch_size, num_epochs=2):
train_ch7(sgd, None, {'lr': lr}, features, labels, batch_size, num_epochs)
train_sgd(1, 1500, 6)
train_sgd(0.005, 1)
train_sgd(0.05, 16,25)
###Output
loss: 0.243131, 0.035000 sec per epoch
###Markdown
7.3.3 Concise Implementation
###Code
# unlike the original book, the first argument here is an optimizer constructor rather than the optimizer's name
# e.g.: optimizer_fn=torch.optim.SGD, optimizer_hyperparams={"lr": 0.05}
def train_pytorch_ch7(optimizer_fn, optimizer_hyperparams, features, labels,
batch_size=10, num_epochs=2):
    # initialize the model
net = nn.Sequential(
nn.Linear(features.shape[-1], 1)
)
loss = nn.MSELoss()
optimizer = optimizer_fn(net.parameters(), **optimizer_hyperparams)
def eval_loss():
return loss(net(features).view(-1), labels).item() / 2
ls = [eval_loss()]
data_iter = torch.utils.data.DataLoader(
torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)
for _ in range(num_epochs):
start = time.time()
for batch_i, (X, y) in enumerate(data_iter):
            # divide by 2 to stay consistent with train_ch7, since squared_loss divides by 2
l = loss(net(X).view(-1), y) / 2
optimizer.zero_grad()
l.backward()
optimizer.step()
if (batch_i + 1) * batch_size % 100 == 0:
ls.append(eval_loss())
    # print the result and plot
print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
d2l.set_figsize()
d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
d2l.plt.xlabel('epoch')
d2l.plt.ylabel('loss')
train_pytorch_ch7(optim.SGD, {"lr": 0.05}, features, labels, 10)
torch.optim.RMSprop
###Output
_____no_output_____
###Markdown
7.3.2. Implementation from Scratch
###Code
def sgd(params, states,hyperparams,grads):
for i,p in enumerate(params):
p.assign_sub(hyperparams['lr'] * grads[i])
###Output
_____no_output_____
###Markdown
The minibatch stochastic gradient descent algorithm was already implemented in the section "Linear Regression Implementation from Scratch". Here we make its input arguments more generic, mainly so that the other optimization algorithms introduced later in this chapter can use the same inputs. Specifically, we add a state input `states` and place the hyperparameters in the dictionary `hyperparams`. In addition, we average the loss over each minibatch inside the training function, so the gradient in the optimization algorithm does not need to be divided by the batch size.
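In other words, with the minibatch loss $\ell = \frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}} \ell_i$, the gradient $\nabla \ell$ already carries the $1/|\mathcal{B}|$ factor, so the parameter update is simply $\boldsymbol{w} \leftarrow \boldsymbol{w} - \eta\,\nabla \ell$ without any extra division by the batch size.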
###Code
# this function is saved in the d2lzh_tensorflow2 package for later use
def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # initialize the model
net, loss = d2l.linreg, d2l.squared_loss
w = tf.Variable(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=tf.float32)
b = tf.Variable(tf.zeros(1,dtype=tf.float32))
def eval_loss():
return np.array(tf.reduce_mean(loss(net(features, w, b), labels)))
ls = [eval_loss()]
data_iter = tf.data.Dataset.from_tensor_slices((features,labels)).batch(batch_size)
data_iter = data_iter.shuffle(100)
for _ in range(num_epochs):
start = time.time()
for batch_i, (X, y) in enumerate(data_iter):
with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X, w, b), y))  # use the averaged loss
grads = tape.gradient(l, [w,b])
            optimizer_fn([w, b], states, hyperparams, grads)  # update the model parameters
if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training error every 100 examples
    # print the result and plot
print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
d2l.set_figsize()
d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
d2l.plt.xlabel('epoch')
d2l.plt.ylabel('loss')
def train_sgd(lr, batch_size, num_epochs=2):
train_ch7(sgd, None, {'lr': lr}, features, labels, batch_size, num_epochs)
train_sgd(1, 1500, 6)
train_sgd(0.005, 1)
train_sgd(0.05, 10)
###Output
loss: 0.246043, 0.863296 sec per epoch
###Markdown
7.3.3. Concise Implementation Likewise, we do not need to implement minibatch stochastic gradient descent ourselves. The `tensorflow.keras.optimizers` module provides many commonly used optimization algorithms, such as SGD, Adam, and RMSProp. Below we create an optimizer instance that optimizes all of the model's parameters, specifying minibatch stochastic gradient descent (SGD) with a learning rate of 0.05 as the optimization algorithm.
###Code
from tensorflow.keras import optimizers
trainer = optimizers.SGD(learning_rate=0.05)
# this function is saved in the d2lzh_tensorflow2 package for later use; the trainer_hyperparams argument is actually unused and is kept only for consistency with the original book
def train_tensorflow2_ch7(trainer_name, trainer_hyperparams, features, labels,
                          batch_size=10, num_epochs=2):
    # initialize the model
net = tf.keras.Sequential()
net.add(tf.keras.layers.Dense(1))
loss = tf.losses.MeanSquaredError()
def eval_loss():
return np.array(tf.reduce_mean(loss(net(features), labels)))
ls = [eval_loss()]
data_iter = tf.data.Dataset.from_tensor_slices((features,labels)).batch(batch_size)
data_iter = data_iter.shuffle(100)
    # create a trainer instance to update the model parameters
for _ in range(num_epochs):
start = time.time()
for batch_i, (X, y) in enumerate(data_iter):
with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X), y))  # use the averaged loss
grads = tape.gradient(l, net.trainable_variables)
            trainer.apply_gradients(zip(grads, net.trainable_variables))  # update the model parameters
if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training error every 100 examples
    # print the result and plot
print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
d2l.set_figsize()
d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
d2l.plt.xlabel('epoch')
d2l.plt.ylabel('loss')
train_tensorflow2_ch7('trainer', {'learning_rate': 0.05}, features, labels, 10)
###Output
loss: 0.532480, 1.300436 sec per epoch
|
examples/toy_model_mstis/toy_mstis_A2_split_analysis.ipynb | ###Markdown
Analyzing a split MSTIS simulationIncluded in this notebook:* Opening split files and look at the data
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import openpathsampling as paths
import numpy as np
%%time
storage = paths.AnalysisStorage('mstis_data.nc')
###Output
CPU times: user 7.65 s, sys: 127 ms, total: 7.78 s
Wall time: 7.78 s
###Markdown
Analyze the rate with no snapshots present in the analyzed file
###Code
mstis = storage.networks.load(0)
mstis.hist_args['max_lambda'] = { 'bin_width' : 0.02, 'bin_range' : (0.0, 0.5) }
mstis.hist_args['pathlength'] = { 'bin_width' : 5, 'bin_range' : (0, 150) }
%%time
mstis.rate_matrix(storage.steps, force=True)
###Output
CPU times: user 4.87 s, sys: 245 ms, total: 5.12 s
Wall time: 4.96 s
###Markdown
Move scheme analysis
###Code
scheme = storage.schemes[0]
scheme.move_summary(storage.steps)
###Output
Null moves for 1 cycles. Excluding null moves:
ms_outer_shooting ran 4.500% (expected 4.98%) of the cycles with acceptance 21/27 (77.78%)
repex ran 20.667% (expected 22.39%) of the cycles with acceptance 49/124 (39.52%)
shooting ran 47.333% (expected 44.78%) of the cycles with acceptance 207/284 (72.89%)
minus ran 2.500% (expected 2.99%) of the cycles with acceptance 11/15 (73.33%)
pathreversal ran 25.000% (expected 24.88%) of the cycles with acceptance 99/150 (66.00%)
###Markdown
Replica move history tree
###Code
import openpathsampling.visualize as vis
reload(vis)
from IPython.display import SVG
tree = vis.PathTree(
storage.steps[0:200],
vis.ReplicaEvolution(replica=2, accepted=False)
)
SVG(tree.svg())
decorrelated = tree.generator.decorrelated
print "We have " + str(len(decorrelated)) + " decorrelated trajectories."
###Output
We have 3 decorrelated trajectories.
###Markdown
Visualizing trajectories
###Code
from toy_plot_helpers import ToyPlot
background = ToyPlot()
background.contour_range = np.arange(-1.5, 1.0, 0.1)
background.add_pes(storage.engines[0].pes)
xval = paths.FunctionCV("xval", lambda snap : snap.xyz[0][0])
yval = paths.FunctionCV("yval", lambda snap : snap.xyz[0][1])
live_vis = paths.StepVisualizer2D(mstis, xval, yval, [-1.0, 1.0], [-1.0, 1.0])
live_vis.background = background.plot()
###Output
_____no_output_____
###Markdown
to make this work we need the actual snapshot coordinates! These are notpresent in the data file anymore so we attach the traj as a fallback.We are not using analysis storage since we do not cache anything.
###Code
storage.cvs
fallback = paths.Storage('mstis_traj.nc', 'r')
storage.fallback = fallback
live_vis.draw_samples(list(tree.samples))
###Output
_____no_output_____ |
author_initiations/InteractionCounts.ipynb | ###Markdown
Scratch Code - Computation of Interaction counts===For a table in the paper and some summary stats.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import re
import pandas as pd
import numpy as np
from collections import Counter
import sqlite3
from tqdm import tqdm
import random
import pickle
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as md
import matplotlib
import pylab as pl
from IPython.core.display import display, HTML
import statsmodels.api as sm
import statsmodels.formula.api as smf
working_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/author_initiations"
assert os.path.exists(working_dir)
git_root_dir = !git rev-parse --show-toplevel
git_root_dir = git_root_dir[0]
figures_dir = os.path.join(git_root_dir, 'figures')
figures_dir
start_date = datetime.fromisoformat('2005-01-01')
start_timestamp = int(start_date.timestamp() * 1000)
end_date = datetime.fromisoformat('2016-06-01')
end_timestamp = int(end_date.timestamp() * 1000)
subset_start_date = datetime.fromisoformat('2014-01-01')
subset_start_timestamp = int(subset_start_date.timestamp() * 1000)
###Output
_____no_output_____
###Markdown
Read in the data
###Code
# load the list of valid users
data_selection_working_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/data_selection"
valid_user_ids = set()
with open(os.path.join(data_selection_working_dir, "valid_user_ids.txt"), 'r') as infile:
for line in infile:
user_id = line.strip()
if user_id == "":
continue
else:
valid_user_ids.add(int(user_id))
len(valid_user_ids)
# load the list of valid sites
data_selection_working_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/data_selection"
valid_site_ids = set()
with open(os.path.join(data_selection_working_dir, "valid_site_ids.txt"), 'r') as infile:
for line in infile:
site_id = line.strip()
if site_id == "":
continue
else:
valid_site_ids.add(int(site_id))
len(valid_site_ids)
# read the journal metadata with author type info added
s = datetime.now()
author_type_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/author_type"
journal_metadata_filepath = os.path.join(author_type_dir, "journal_metadata_with_author_type.df")
journal_df = pd.read_feather(journal_metadata_filepath)
print(datetime.now() - s)
len(journal_df)
# as a quick fix for invalid dates in journals, when created_at is 0 we use the updated_at instead
# note that only 41 updates have this issue
invalid_created_at = journal_df.created_at <= 0
journal_df.loc[invalid_created_at, 'created_at'] = journal_df.loc[invalid_created_at, 'updated_at']
health_cond_filepath = os.path.join("/home/srivbane/shared/caringbridge/data/projects/sna-social-support/user_metadata", "assigned_health_conditions.feather")
user_health_conds_df = pd.read_feather(health_cond_filepath)
len(user_health_conds_df)
# read the user author type dataframe
author_type_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/author_type"
user_patient_proportions_filepath = os.path.join(author_type_dir, 'user_patient_proportions.df')
user_df = pd.read_feather(user_patient_proportions_filepath)
len(user_df)
# read the user->user interactions dataframe
metadata_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/user_metadata"
u2u_df = pd.read_feather(os.path.join(metadata_dir,"u2u_df.feather"))
len(u2u_df)
# read in the interactions dataframe
metadata_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/user_metadata"
author_to_site = os.path.join(metadata_dir, "interaction_metadata.h5")
ints_df = pd.read_hdf(author_to_site)
len(ints_df)
ints_df.head()
Counter(ints_df.int_type).most_common()
Counter(ints_df[ints_df.site_id.isin(valid_site_ids)].int_type).most_common()
Counter(ints_df[~ints_df.is_self_interaction].int_type).most_common()
Counter(ints_df[~ints_df.is_self_interaction].drop_duplicates(subset=['user_id', 'site_id', 'int_type']).int_type).most_common(), \
len(ints_df[~ints_df.is_self_interaction].drop_duplicates(subset=['user_id', 'site_id']))
687432 + 267283 + 197188
unique = ints_df[~ints_df.is_self_interaction].sort_values(by='created_at').drop_duplicates(subset=['user_id', 'site_id'])
len(unique)
Counter(unique.int_type).most_common()
###Output
_____no_output_____ |
obsolete_scripts.dir/GL_postprocessing_geocoded_complex.ipynb | ###Markdown
Run through all images and save contours
###Code
n_test=500
ddir = os.path.expanduser('~/Google Drive File Stream/Shared drives/GROUNDING_LINE_TEAM_DRIVE/ML_Yara/geocoded_v1/')
sub_tdir = {}
sub_tdir['Train'] = os.path.join(ddir,'train_n%i.dir'%n_test)
sub_tdir['Test'] = os.path.join(ddir,'test_n%i.dir'%n_test)
#-- Get list of images
npy_list = {}
for t in ['Train','Test']:
tempfileList = os.listdir(sub_tdir[t])
npy_list[t] = [f for f in tempfileList if (f.endswith('.npy') and f.startswith('coco'))]
%who
#-- set threshold for getting contours
eps = 0.3
for t in ['Train']: #['Train','Test']
#-- get list of PREDICTION file names
file_list_name = [os.path.basename(f) for f in fileList[t]]
#-- Read images and save to file
for i,f in enumerate(npy_list[t]):
fig,ax = plt.subplots(2,3,figsize=(15,10))
img = np.load(os.path.join(sub_tdir[t],f))
lbl = binary_dilation(np.load(os.path.join(sub_tdir[t],f.replace('coco','delineation'))).reshape((img.shape[0],img.shape[1])))
#-- find corresponding prediction file
file_ind = file_list_name.index(f.replace('coco','pred').replace('.npy','.png'))
im = imageio.imread(fileList[t][file_ind]).astype(float)/255.
#-- close contour ends to make polygons
im[np.nonzero(im[:,0] > eps),0] = eps
im[np.nonzero(im[:,-1] > eps),-1] = eps
im[0,np.nonzero(im[0,:] > eps)] = eps
im[-1,np.nonzero(im[-1,:] > eps)] = eps
#- get contours
contours = measure.find_contours(im, eps)
#-- make contours into closed polyons to find pinning points
pols = [None]*len(contours)
for n, contour in enumerate(contours):
pols[n] = Polygon(zip(contour[:,0],contour[:,1]))
        #-- initialize matrix of polygon containment
cmat = np.zeros((len(pols),len(pols)),dtype=bool)
for i in range(len(pols)):
for j in range(len(pols)):
if (i != j) and pols[i].contains(pols[j]):
#-- if the outer contour is significantly longer than the
#-- inner contour, then it's not a pinning point but a loop
#-- in the GL (use factor of 10 difference). In that case, get
#-- the inner loop instead
if len(contours[i][:,0]) > 10*len(contours[j][:,0]):
cmat[j,i] = True
else:
cmat[i,j] = True
        #-- However, note that if one outer contour has more than 1 inner contour,
#-- then it's not a pinning point and it's actually just noise.
#-- In that case, ignore the inner contours. We add a new array for 'noise' points
#-- to be ignored
noise = []
#-- get indices of rows with more than 1 True column in cmat
for i in range(len(cmat)):
if np.count_nonzero(cmat[i,:]) > 1:
noise_idx, = np.nonzero(cmat[i,:])
                #-- concatenate to noise list
noise += list(noise_idx)
#-- turn the matrix elements back off
for j in noise_idx:
cmat[i,j] = False
#-- remove repeating elements
noise = list(set(noise))
#-- go through overlapping elements and get nonoverlapping area to convert to 'donuts'
#-- NOTE we will get the the contour corresponding to the inner ring
outer = []
inner = []
for i in range(len(pols)):
for j in range(len(pols)):
if cmat[i,j] and (i not in noise) and (j not in noise):
#-- save indices of inner and outer rings
outer.append(i)
inner.append(j)
#-- initialize centerline plot
im2 = np.zeros(im.shape, dtype=int)
#--------------------------------------------------
#-- PLOTS
#--------------------------------------------------
# plot image
ax[0,0].imshow(img[:,:,0],cmap='bwr',vmin=-4,vmax=4)
ax[0,0].set_title('Real')
ax[0,1].imshow(img[:,:,1],cmap='bwr',vmin=-4,vmax=4)
ax[0,1].set_title('Imag')
# plot training label
ax[0,2].imshow(lbl, cmap=plt.cm.gray)
ax[0,2].set_title('Label')
# plot prediction
ax[1,0].imshow(im, cmap=plt.cm.gray)
ax[1,0].set_title('Pred')
# plot contours
ax[1,1].imshow(np.zeros(im.shape),cmap=plt.cm.gray)
for n, contour in enumerate(contours):
if (n not in noise) and (n not in outer):
col = np.random.rand(3,)
ax[1,1].plot(contour[:, 1], contour[:, 0], linewidth=2, color=col)
#-- draw line through contour
im2[np.round(contour[:, 0]).astype('int'), np.round(contour[:, 1]).astype('int')] = 1
im2 = thin(ndimage.binary_fill_holes(im2))
ax[1,1].set_title('Post')
ax[1,1].get_xaxis().set_ticks([])
ax[1,1].get_yaxis().set_ticks([])
ax[1,2].imshow(im2, cmap=plt.cm.gray)
ax[1,2].set_title('Centerline')
#-- remove plot axes
for ix in range(2):
for iy in range(3):
ax[ix,iy].set_axis_off()
plt.savefig(os.path.join(out_subdir[t],f.replace('coco','post').replace('.npy','.png')),format='PNG')
plt.close()
###Output
_____no_output_____
###Markdown
Experiment with gdal vectorization (06/10/2020)
###Code
import gdal
import ogr
gdal.Polygonize?
#-- Read an image to test
infile = os.path.join(indir,'Train_predictions.dir/atrous_32init_drop0.2_customLossR727.dir/pred_gl_054_180328-180403-180403-180409_010230-021301-021301-010405_T102357_T102440_x1590_y1024_DIR01.png')
test_ind = fileList['Train'].index(infile)
f = fileList['Train'][test_ind]
im = imageio.imread(f).astype(float)/255.
#-- gdal.Polygonize expects a GDAL raster band (not a numpy array), so also open the prediction with GDAL
src_ds = gdal.Open(f)
src_band = src_ds.GetRasterBand(1)
#-- output file
outfile = os.path.join(out_subdir['Train'],f.replace('pred','post').replace('.png','.shp'))
outShapefile = "polygonized"
driver = ogr.GetDriverByName("ESRI Shapefile")
#-- delete existing file
if os.path.exists(outfile):
    driver.DeleteDataSource(outfile)
outDatasource = driver.CreateDataSource(outfile)
outLayer = outDatasource.CreateLayer(outShapefile, srs=None)
newField = ogr.FieldDefn('MYFLD', ogr.OFTInteger)
outLayer.CreateField(newField)
gdal.Polygonize(src_band, None, outLayer, 0, [], callback=None)
outDatasource.Destroy()
src_ds = None
###Output
_____no_output_____
###Markdown
Experiment with Potrace
###Code
import geopandas as gpd
infile = '/Users/yaramohajerani/Google Drive File Stream/My Drive/GL_Learning/Train_predictions.dir/atrous_32init_drop0.2_customLossR727.dir/pred_gl_007_180424-180430-180430-180506_021604-010708-010708-021779_T050918_T050832_x1930_y1024_DIR01.png'
test_ind = fileList['Train'].index(infile)
im = imageio.imread(fileList['Train'][test_ind]).astype(float)/255.
fig = plt.figure(1,figsize=(10,10))
plt.imshow(im, cmap=plt.cm.gray)
plt.show()
###Output
_____no_output_____
###Markdown
First, convert the image to an intermediary .pnm file
###Code
!convert "/Users/yaramohajerani/Google Drive File Stream/My Drive/GL_Learning/Train_predictions.dir/atrous_32init_drop0.2_customLossR727.dir/pred_gl_007_180424-180430-180430-180506_021604-010708-010708-021779_T050918_T050832_x1930_y1024_DIR01.png" "/Users/yaramohajerani/pred_gl_007_180424-180430-180430-180506_021604-010708-010708-021779_T050918_T050832_x1930_y1024_DIR01.pnm"
###Output
_____no_output_____
###Markdown
Use `potrace` to vectorize:
###Code
!potrace /Users/yaramohajerani/pred_gl_007_180424-180430-180430-180506_021604-010708-010708-021779_T050918_T050832_x1930_y1024_DIR01.pnm -b geojson -o /Users/yaramohajerani/pred_gl_007_180424-180430-180430-180506_021604-010708-010708-021779_T050918_T050832_x1930_y1024_DIR01.geojson
gdf = gpd.read_file('/Users/yaramohajerani/pred_gl_007_180424-180430-180430-180506_021604-010708-010708-021779_T050918_T050832_x1930_y1024_DIR01.geojson')
gdf
gdf.plot()
gdf['geometry'][0]
gdf['geometry'][1]
gdf['geometry'][2]
###Output
_____no_output_____
###Markdown
Experiment wiht centerline
###Code
from centerline.geometry import Centerline
infile = '/Users/yaramohajerani/Google Drive File Stream/My Drive/GL_Learning/Train_predictions.dir/atrous_32init_drop0.2_customLossR727.dir/pred_gl_007_180424-180430-180430-180506_021604-010708-010708-021779_T050918_T050832_x1930_y1024_DIR01.png'
test_ind = fileList['Train'].index(infile)
im = imageio.imread(fileList['Train'][test_ind]).astype(float)/255.
fig = plt.figure(1,figsize=(10,10))
plt.imshow(im, cmap=plt.cm.gray)
plt.show()
#-- First, we polygonize
eps = 0.3 # contour threshold
#-- close contour ends to make polygons
im[np.nonzero(im[:,0] > eps),0] = eps
im[np.nonzero(im[:,-1] > eps),-1] = eps
im[0,np.nonzero(im[0,:] > eps)] = eps
im[-1,np.nonzero(im[-1,:] > eps)] = eps
#- get contours
contours = measure.find_contours(im, eps)
#-- make contours into closed polyons to find pinning points
pols = [None]*len(contours)
for n, contour in enumerate(contours):
pols[n] = Polygon(zip(contour[:,0],contour[:,1]))
pols[1].contains(pols[2])
pols[1].difference(pols[2])
pp = pols[1].difference(pols[2])
attributes = {"id": 0, "name": "polygon", "valid": False}
cl = Centerline(pp,interpolation_distance=5, **attributes)
cl
!pip install pySkeleton
from pySkeleton import polygon
len(cl)
cl.length
cl.geom_type
from shapely.geometry import MultiLineString,LineString
l = MultiLineString(cl)
type(l)
xy = []
for i in range(len(cl)):
xy.extend(cl[i].coords.xy)
from shapely import ops
merged_line = ops.linemerge(cl)
merged_line
len(merged_line), len(cl)
#-- get longest line and plot
np.argmax([m.length for m in merged_line])
ll = merged_line[2]
ll
x,y = ll.coords.xy
len(x),len(y)
cn = [list(a) for a in zip(x,y)]
len(cn)
#-- directory setup
ddir = os.path.expanduser('~/GL_learning_data/geocoded_v1')
subdir = 'atrous_32init_drop0.2_customLossR727.dir'
FILTER = 6000
pred_dir = os.path.join(ddir,'stitched.dir',subdir)
pred_file = 'gl_069_181218-181224-181224-181230_014095-025166-025166-014270_T110614_T110655.tif'
#-- threshold for getting contours and centerlines
eps = 0.3
#-- read file
raster = rasterio.open(os.path.join(pred_dir,pred_file),'r')
im = raster.read(1)
#-- get transformation matrix
trans = raster.transform
bb = raster.bounds
bb
fig = plt.figure(1,figsize=(10,10))
plt.imshow(im, cmap=plt.cm.gray)
plt.show()
#-- get contours of prediction
#-- close contour ends to make polygons
im[np.nonzero(im[:,0] > eps),0] = eps
im[np.nonzero(im[:,-1] > eps),-1] = eps
im[0,np.nonzero(im[0,:] > eps)] = eps
im[-1,np.nonzero(im[-1,:] > eps)] = eps
contours = measure.find_contours(im, eps)
#-- make contours into closed polyons to find pinning points
#-- also apply noise filter and append to noise list
x = {}
y = {}
noise = []
pols = [None]*len(contours)
pol_type = [None]*len(contours)
for n,contour in enumerate(contours):
#-- convert to coordinates
x[n],y[n] = rasterio.transform.xy(trans, contour[:,0], contour[:,1])
pols[n] = Polygon(zip(x[n],y[n]))
len(pols)
ignore_list = []
for i in range(len(pols)):
for j in range(len(pols)):
if (i != j) and pols[i].contains(pols[j]):
pols[i] = pols[i].difference(pols[j])
ignore_list.append(j)
ignore_list
%matplotlib widget
fig = plt.figure(1, figsize=(10,10))
ax = fig.add_subplot(111)
for i,p in enumerate(pols):
ring_patch = PolygonPatch(p)
ax.add_patch(ring_patch)
ax.set_xlim([bb[0],bb[2]])
ax.set_ylim([bb[1],bb[3]])
plt.show()
pols[5]
idx = 5
dis = pols[idx].length/100
#-- get centerlines
attributes = {"id": pols[idx], "name": "polygon", "valid": True}
cl = Centerline(pols[idx],interpolation_distance=dis, **attributes)
cl
cl.geom_type
#-- get longest line and plot
merged_lines = linemerge(cl)
line_ind = np.argmax([m.length for m in merged_lines])
xc,yc = merged_lines[line_ind].coords.xy
len(cl),len(merged_lines)
merged_lines
merged_lines[line_ind].length
fig = plt.figure(1, figsize=(10,10))
ax = fig.add_subplot(111)
ring_patch = PolygonPatch(pols[idx],alpha=0.5)
ax.add_patch(ring_patch)
plt.plot(yc,xc,'r-',linewidth=1.5)
ext_x,ext_y = pols[idx].exterior.coords.xy
ax.set_xlim([np.min(ext_x),np.max(np.max(ext_x))])
ax.set_ylim([np.min(ext_y),np.max(np.max(ext_y))])
plt.show()
###Output
_____no_output_____ |
Object tracking and Localization/representing_state_&_Motion/Matrix_Multiplication.ipynb | ###Markdown
Predict stateHere is the current implementation of the `predict_state` function. It takes in a state (a Python list), and then separates those into position and velocity to calculate a new, predicted state. It uses a constant velocity motion model.**In this exercise, we'll be improving this function, and using matrix multiplication to efficiently calculate the predicted state!**
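In equation form, the constant velocity model above is simply $x' = x + v\,\Delta t$ and $v' = v$.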
###Code
# The current predict state function
# Predicts the next state based on a motion model
def predict_state(state, dt):
# Assumes a valid state had been passed in
x = state[0]
velocity = state[1]
# Assumes a constant velocity model
new_x = x + velocity*dt
# Create and return the new, predicted state
predicted_state = [new_x, velocity]
return predicted_state
###Output
_____no_output_____
###Markdown
Matrix operationsYou've been given a matrix class that can create new matrices and perform one operation: multiplication. In our directory this is called `matrix.py`.Similar to the Car class, we can use this to initialize matrix objects.
###Code
# import the matrix file
import matrix
# Initialize a state vector
initial_position = 0 # meters
velocity = 50 # m/s
# Notice the syntax for creating a state column vector ([ [x], [v] ])
# Commas separate these items into rows and brackets into columns
initial_state = matrix.Matrix([ [initial_position],
[velocity] ])
###Output
_____no_output_____
###Markdown
Transformation matrixNext, define the state transformation matrix and print it out!
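For reference, multiplying the state vector by this transformation matrix reproduces exactly the constant velocity update:

$$\begin{bmatrix} x' \\ v' \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ v \end{bmatrix} = \begin{bmatrix} x + v\,\Delta t \\ v \end{bmatrix}$$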
###Code
# Define the state transformation matrix
dt = 1
tx_matrix = matrix.Matrix([ [1, dt],
[0, 1] ])
print(tx_matrix)
###Output
[[1 1 ]
[0 1 ]
]
###Markdown
TODO: Modify the predict state function to use matrix multiplicationNow that you know how to create matrices, modify the `predict_state` function to work with them!Note: you can multiply a matrix A by a matrix B by writing `A*B` and it will return a new matrix.
###Code
# The current predict state function
def predict_state_mtx(state, dt):
## TODO: Assume that the state passed in is a Matrix object
## Using a constant velocity model and a transformation matrix
## Create and return the new, predicted state!
# Assumes a valid state had been passed in
tx_matrix = matrix.Matrix([ [1, dt],
[0, 1] ])
predicted_state = tx_matrix * state
return predicted_state
###Output
_____no_output_____
###Markdown
Test cellHere is an initial state vector and dt to test your function with!
###Code
# initial state variables
initial_position = 10 # meters
velocity = 30 # m/s
# Initial state vector
initial_state = matrix.Matrix([ [initial_position],
[velocity] ])
print('The initial state is: ' + str(initial_state))
# after 2 seconds make a prediction using the new function
state_est1 = predict_state_mtx(initial_state, 2)
print('State after 2 seconds is: ' + str(state_est1))
# Make more predictions!
# after 3 more
state_est2 = predict_state_mtx(state_est1, 3)
print('State after 3 more seconds is: ' + str(state_est2))
# after 3 more
state_est3 = predict_state_mtx(state_est2, 3)
print('Final state after 3 more seconds is: ' + str(state_est3))
###Output
State after 3 more seconds is: [[160.0 ]
[30.0 ]
]
Final state after 3 more seconds is: [[250.0 ]
[30.0 ]
]
|
app/notebooks/labeled_identities/hosts/richard_lui.ipynb | ###Markdown
Table of Contents1 Name2 Search2.1 Load Cached Results2.2 Build Model From Google Images3 Analysis3.1 Gender cross validation3.2 Face Sizes3.3 Screen Time Across All Shows3.4 Appearances on a Single Show4 Persist to Cloud4.1 Save Model to Google Cloud Storage4.2 Save Labels to DB4.2.1 Commit the person and labeler4.2.2 Commit the FaceIdentity labels
###Code
from esper.prelude import *
from esper.identity import *
from esper import embed_google_images
###Output
_____no_output_____
###Markdown
Name Please add the person's name and their expected gender below (Male/Female).
###Code
name = 'Richard Lui'
gender = 'Male'
###Output
_____no_output_____
###Markdown
Search Load Cached Results Reads cached identity model from local disk. Run this if the person has been labelled before and you only wish to regenerate the graphs. Otherwise, if you have never created a model for this person, please see the next section.
###Code
assert name != ''
results = FaceIdentityModel.load(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(results)
###Output
_____no_output_____
###Markdown
Build Model From Google Images Run this section if you do not have a cached model and precision curve estimates. This section will grab images using Google Image Search and score each of the faces in the dataset. We will interactively build the precision vs score curve.It is important that the images that you select are accurate. If you make a mistake, rerun the cell below.
###Code
assert name != ''
# Grab face images from Google
img_dir = embed_google_images.fetch_images(name)
# If the images returned are not satisfactory, rerun the above with extra params:
# query_extras='' # additional keywords to add to search
# force=True # ignore cached images
face_imgs = load_and_select_faces_from_images(img_dir)
face_embs = embed_google_images.embed_images(face_imgs)
assert(len(face_embs) == len(face_imgs))
reference_imgs = tile_imgs([cv2.resize(x[0], (200, 200)) for x in face_imgs if x], cols=10)
def show_reference_imgs():
print('User selected reference images for {}.'.format(name))
imshow(reference_imgs)
plt.show()
show_reference_imgs()
# Score all of the faces in the dataset (this can take a minute)
face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs)
precision_model = PrecisionModel(face_ids_by_bucket)
###Output
_____no_output_____
###Markdown
Now we will validate which of the images in the dataset are of the target identity.__Hover over with mouse and press S to select a face. Press F to expand the frame.__
###Code
show_reference_imgs()
print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance '
'to your selected images. (The first page is more likely to have non "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
lower_widget = precision_model.get_lower_widget()
lower_widget
show_reference_imgs()
print(('Mark all images that ARE {}. Thumbnails are ordered by ASCENDING distance '
'to your selected images. (The first page is more likely to have "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
upper_widget = precision_model.get_upper_widget()
upper_widget
###Output
_____no_output_____
###Markdown
Run the following cell after labelling to compute the precision curve. Do not forget to re-enable jupyter shortcuts.
###Code
# Compute the precision from the selections
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}
results = FaceIdentityModel(
name=name,
face_ids_by_bucket=face_ids_by_bucket,
face_ids_to_score=face_ids_to_score,
precision_by_bucket=precision_by_bucket,
model_params={
'images': list(zip(face_embs, face_imgs))
}
)
plot_precision_and_cdf(results)
###Output
_____no_output_____
###Markdown
The next cell persists the model locally.
###Code
results.save()
###Output
_____no_output_____
###Markdown
Analysis Gender cross validationSituations where the identity model disagrees with the gender classifier may be cause for alarm. We would like to check that instances of the person have the expected gender as a sanity check. This section shows the breakdown of the identity instances and their labels from the gender classifier.
###Code
gender_breakdown = compute_gender_breakdown(results)
print('Expected counts by gender:')
for k, v in gender_breakdown.items():
print(' {} : {}'.format(k, int(v)))
print()
print('Percentage by gender:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
print(' {} : {:0.1f}%'.format(k, 100 * v / denominator))
print()
###Output
_____no_output_____
###Markdown
Situations where the identity detector returns high confidence, but where the gender is not the expected gender indicate either an error on the part of the identity detector or the gender detector. The following visualization shows randomly sampled images, where the identity detector returns high confidence, grouped by the gender label.
###Code
high_probability_threshold = 0.8
show_gender_examples(results, high_probability_threshold)
###Output
_____no_output_____
###Markdown
Face SizesFaces shown on-screen vary in size. For a person such as a host, they may be shown in a full body shot or as a face in a box. Faces in the background or those part of side graphics might be smaller than the rest. When calculuating screentime for a person, we would like to know whether the results represent the time the person was featured as opposed to merely in the background or as a tiny thumbnail in some graphic.The next cell, plots the distribution of face sizes. Some possible anomalies include there only being very small faces or large faces.
###Code
plot_histogram_of_face_sizes(results)
###Output
_____no_output_____
###Markdown
The histogram above shows the distribution of face sizes, but not how those sizes occur in the dataset. For instance, one might ask why some faces are so large or whether the small faces are actually errors. The following cell groups example faces, which are of the target identity with high probability, by their sizes in terms of screen area.
###Code
high_probability_threshold = 0.8
show_faces_by_size(results, high_probability_threshold, n=10)
###Output
_____no_output_____
###Markdown
Screen Time Across All ShowsOne question that we might ask about a person is whether they received a significantly different amount of screentime on different shows. The following section visualizes the amount of screentime by show in total minutes and also as a proportion of the show's total time. For a celebrity or political figure such as Donald Trump, we would expect significant screentime on many shows. For a show host such as Wolf Blitzer, we expect the screentime to be high for shows hosted by Wolf Blitzer.
###Code
screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
###Output
_____no_output_____
###Markdown
Appearances on a Single ShowFor people such as hosts, we would like to examine in greater detail the screen time allotted for a single show. First, fill in a show below.
###Code
show_name = 'MSNBC Live'
# Compute the screen time for each video of the show
screen_time_by_video_id = compute_screen_time_by_video(results, show_name)
###Output
_____no_output_____
###Markdown
One question we might ask about a host is "how long they are shown on screen" for an episode. Likewise, we might also ask for how many episodes the host is not present due to being on vacation or on assignment elsewhere. The following cell plots a histogram of the distribution of the length of the person's appearances in videos of the chosen show.
###Code
plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
###Output
_____no_output_____
###Markdown
For a host, we expect screentime over time to be consistent as long as the person remains a host. For figures such as Hillary Clinton, we expect the screentime to track events in the real world, such as the lead-up to the 2016 election, and then to drop afterwards. The following cell plots a time series of the person's screentime over time. Each dot is a video of the chosen show. Red Xs are videos for which the face detector did not run.
###Code
plot_screentime_over_time(name, show_name, screen_time_by_video_id)
###Output
_____no_output_____
###Markdown
We hypothesized that a host is more likely to appear at the beginning of a video and then also appear throughout the video. The following plot visualizes the distribution of shot beginning times for videos of the show.
###Code
plot_distribution_of_appearance_times_by_video(results, show_name)
###Output
_____no_output_____
###Markdown
In section 3.3, we see that some shows may have much larger variance in the screen time estimates than others. This may be because a host or frequent guest appears similar to the target identity. Alternatively, the images of the identity may be consistently low quality, leading to lower scores. The next cell plots a histogram of the probabilities for faces in a show.
###Code
plot_distribution_of_identity_probabilities(results, show_name)
###Output
_____no_output_____
###Markdown
Persist to CloudThe remaining code in this notebook uploads the built identity model to Google Cloud Storage and adds the FaceIdentity labels to the database. Save Model to Google Cloud Storage
###Code
gcs_model_path = results.save_to_gcs()
###Output
_____no_output_____
###Markdown
To ensure that the model stored to Google Cloud is valid, we load it and plot the precision and CDF curves below.
###Code
gcs_results = FaceIdentityModel.load_from_gcs(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(gcs_results)
###Output
_____no_output_____
###Markdown
Save Labels to DB If you are satisfied with the model, we can commit the labels to the database.
###Code
from django.core.exceptions import ObjectDoesNotExist
def standardize_name(name):
return name.lower()
person_type = ThingType.objects.get(name='person')
try:
person = Thing.objects.get(name=standardize_name(name), type=person_type)
print('Found person:', person.name)
except ObjectDoesNotExist:
person = Thing(name=standardize_name(name), type=person_type)
print('Creating person:', person.name)
labeler = Labeler(name='face-identity:{}'.format(person.name), data_path=gcs_model_path)
###Output
_____no_output_____
###Markdown
Commit the person and labeler The labeler and person have been created but not yet saved to the database. If a person was created, please make sure that the name is correct before saving.
###Code
person.save()
labeler.save()
###Output
_____no_output_____
###Markdown
Commit the FaceIdentity labels Now, we are ready to add the labels to the database. We will create a FaceIdentity for each face whose probability exceeds the minimum threshold.
###Code
commit_face_identities_to_db(results, person, labeler, min_threshold=0.001)
print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count()))
###Output
_____no_output_____ |
iguanas/rule_selection/examples/advanced_bayes_search_cv_example.ipynb | ###Markdown
Advanced Bayes Search CV Example This is a more advanced example of how the `BayesSearchCV` class can be applied - it's recommended that you first read through the simpler `bayes_search_cv_example`. The `BayesSearchCV` class is used to search for the set of hyperparameters that produce the best decision engine performance for a given Iguanas Pipeline, whilst also reducing the likelihood of overfitting.The process is as follows:* Generate k-fold stratified cross validation datasets. * For each of the training and validation datasets: * Fit the pipeline on the training set using a set of parameters chosen by the Bayesian Optimiser from a given set of ranges. * Apply the pipeline to the validation set to return a prediction. * Use the provided `scorer` to calculate the score of the prediction.* Return the parameter set which generated the highest mean overall score across the validation datasets.In this example, we'll consider the following more advanced workflow (compared to the standard `bayes_search_cv_example` notebook), which considers the generation of a Rules-Based System for a credit card fraud transaction use case: Here, we have a fraud detection use case, and we're aiming to create two distinct rule sets - one for flagging fraudulent behaviour; one for flagging good behaviour. Each of these rule sets will be comprised of a generated rule set and an existing rule set. We'll optimise and filter these two rule sets separately, then combine and feed them into the decision engine optimiser. **Note:** we optimise the generated rules as they'll be created using the `RuleGeneratorDT` class, which generates rules from the branches of decision trees - these split based on gini or entropy - so we can further optimise them for a specific metric. **The decision engine will have the following constraint:** for a given transaction, if any approve rules fire it will be approved; else, if any reject rules fire it will be rejected; else, it will be approved. We'll use the `BayesSearchCV` class to optimise the hyperparameters of the steps in this workflow, **ensuring that we maximise the revenue for our decision engine.** --- Import packages
###Code
from iguanas.rule_generation import RuleGeneratorDT
from iguanas.rule_selection import SimpleFilter, CorrelatedFilter, GreedyFilter, BayesSearchCV
from iguanas.metrics import FScore, Precision, Revenue, JaccardSimilarity
from iguanas.rbs import RBSOptimiser, RBSPipeline
from iguanas.correlation_reduction import AgglomerativeClusteringReducer
from iguanas.pipeline import LinearPipeline, ParallelPipeline
from iguanas.pipeline.class_accessor import ClassAccessor
from iguanas.space import UniformFloat, UniformInteger, Choice
from iguanas.rules import Rules
from iguanas.rule_optimisation import BayesianOptimiser
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from category_encoders.one_hot import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
###Output
_____no_output_____
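###Markdown
Before loading any data, it helps to picture the search procedure described above. The sketch below is a minimal illustration of that loop for pandas inputs — `sample_params` and `make_pipeline` are hypothetical helpers standing in for the Bayesian optimiser and pipeline construction, not part of the Iguanas API.
###Code
# Minimal sketch of the cross-validated search described above (hypothetical helpers, not the Iguanas implementation)
from sklearn.model_selection import StratifiedKFold
def search_sketch(X, y, sample_params, make_pipeline, metric, n_iter=10, cv=3):
    best_params, best_score = None, float('-inf')
    for _ in range(n_iter):  # each iteration tries one candidate parameter set
        params = sample_params()  # in practice, chosen by the Bayesian optimiser
        fold_scores = []
        for train_idx, val_idx in StratifiedKFold(n_splits=cv).split(X, y):
            pipeline = make_pipeline(params)  # build a fresh pipeline with these parameters
            pipeline.fit(X.iloc[train_idx], y.iloc[train_idx])  # fit on the training fold
            y_pred = pipeline.predict(X.iloc[val_idx])  # apply to the validation fold
            fold_scores.append(metric(y_pred, y.iloc[val_idx]))  # score the prediction
        mean_score = sum(fold_scores) / len(fold_scores)  # mean score across validation folds
        if mean_score > best_score:  # keep the best-performing parameter set
            best_params, best_score = params, mean_score
    return best_params, best_score
###Output
_____no_output_____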
###Markdown
Read in data Let's read in the [credit card fraud dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud) from Kaggle.**Note:** this data has been altered to include some null values in the `V1` column. This is to simulate unprocessed data (the dataset on Kaggle has been processed using PCA, so there are no null values). It has also been randomly sampled to 10% of its original number of records, to reduce the file size.
###Code
target_col = 'Class'
time_col = 'Time'
amt_col = 'Amount'
# Read in data
df = pd.read_csv('dummy_data/creditcard.csv')
# Sort data by time ascending
df = df.sort_values(time_col, ascending=True)
# Create X and y dataframes
X = df.drop([target_col, time_col], axis=1)
y = df[target_col]
X_train_raw, X_test_raw, y_train, y_test = train_test_split(
X,
y,
test_size=0.33,
random_state=42
)
###Output
_____no_output_____
###Markdown
To calculate the **Revenue**, we need the monetary amount of each transaction - we'll use these later:
###Code
amts_train = X_train_raw[amt_col]
amts_test = X_test_raw[amt_col]
###Output
_____no_output_____
###Markdown
Process data Let's impute the null values with the mean:
###Code
imputer = SimpleImputer(strategy='mean')
X_train = pd.DataFrame(
imputer.fit_transform(X_train_raw),
columns=X_train_raw.columns,
index=X_train_raw.index
)
X_test = pd.DataFrame(
imputer.transform(X_test_raw),
columns=X_test_raw.columns,
index=X_test_raw.index
)
# Check nulls have been imputed
X_train.isna().sum().sum(), X_test.isna().sum().sum()
###Output
_____no_output_____
###Markdown
Existing rules Let's also assume we have the following existing rules, stored in the standard Iguanas string format:
###Code
fraud_rule_strings = {
"ExistingReject1": "((X['V1']<0)|(X['V1'].isna()))&(X['V3']<1)",
"ExistingReject2": "(X['V2']>3)",
}
good_rule_strings = {
"ExistingApprove1": "(X['V1']>0)&(X['V3']>1)",
"ExistingApprove2": "(X['V2']<3)",
"ExistingApprove3": "(X['V4']<3)"
}
###Output
_____no_output_____
###Markdown
We can create a `Rules` class for each of these:
###Code
fraud_rules = Rules(rule_strings=fraud_rule_strings)
good_rules = Rules(rule_strings=good_rule_strings)
###Output
_____no_output_____
###Markdown
Then convert them to the standard Iguanas lambda expression format (we'll need this for the optimisation step):
###Code
fraud_rule_lambdas = fraud_rules.as_rule_lambdas(
as_numpy=False,
with_kwargs=True
)
good_rule_lambdas = good_rules.as_rule_lambdas(
as_numpy=False,
with_kwargs=True
)
###Output
_____no_output_____
###Markdown
---- Set up pipeline Before we can apply the `BayesSearchCV` class, we need to set up our pipeline. To create the workflow shown at the beginning of the notebook, we must use a combination of `LinearPipeline` and `ParallelPipeline` classes as shown below:  Let's begin building the **Fraud *LinearPipeline***. Fraud *LinearPipeline* Let's first instantiate the classes that we'll use in the pipeline:
###Code
# F1 Score
f1 = FScore(beta=1)
# Precision
p = Precision()
# Rule generation
fraud_gen = RuleGeneratorDT(
metric=f1.fit,
n_total_conditions=2,
tree_ensemble=RandomForestClassifier(
n_estimators=10,
random_state=0
),
target_feat_corr_types='Infer',
    rule_name_prefix='Reject' # Set this so generated reject rules are distinguishable from approve rules
)
# Rule optimisation (for generated rules)
fraud_gen_opt = BayesianOptimiser(
rule_lambdas=ClassAccessor(
class_tag='fraud_gen',
class_attribute='rule_lambdas'
),
lambda_kwargs=ClassAccessor(
class_tag='fraud_gen',
class_attribute='lambda_kwargs'
),
metric=f1.fit,
n_iter=10
)
# Rule optimisation (for existing rules)
fraud_opt = BayesianOptimiser(
rule_lambdas=fraud_rule_lambdas,
lambda_kwargs=fraud_rules.lambda_kwargs,
metric=f1.fit,
n_iter=10
)
# Rule filter (performance-based)
fraud_sf = SimpleFilter(
threshold=0.1,
operator='>=',
metric=f1.fit
)
# Rule filter (correlation-based)
js = JaccardSimilarity()
fraud_cf = CorrelatedFilter(
correlation_reduction_class=AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
rules=ClassAccessor(
class_tag='fraud_gen',
class_attribute='rules'
)
)
###Output
_____no_output_____
###Markdown
Now we can create our **Fraud Rule Generation *LinearPipeline***. Note that we pass the tag for the optimisation of the generated rules to the `use_init_data` parameter, so that the feature set is passed to the `BayesianOptimiser` class, rather than the output from the `RuleGeneratorDT`:
###Code
fraud_gen_lp = LinearPipeline(
steps = [
('fraud_gen', fraud_gen),
('fraud_gen_opt', fraud_gen_opt),
],
use_init_data=['fraud_gen_opt']
)
###Output
_____no_output_____
###Markdown
And then our **Fraud *ParallelPipeline*** (noting that one of the steps in this pipeline is the **Fraud Rule Generation *LinearPipeline*** created above):
###Code
fraud_gen_lp = ParallelPipeline(
steps = [
('fraud_gen_lp', fraud_gen_lp),
('fraud_opt', fraud_opt),
]
)
###Output
_____no_output_____
###Markdown
And then finally, our **Fraud *LinearPipeline***:
###Code
fraud_lp = LinearPipeline(
steps = [
('fraud_gen_lp', fraud_gen_lp),
('fraud_sf', fraud_sf),
('fraud_cf', fraud_cf)
]
)
###Output
_____no_output_____
###Markdown
Now we can do the same for the **Good *LinearPipeline***: Good *LinearPipeline* Let's first instantiate the classes that we'll use in the pipeline:
###Code
# Rule generation
good_gen = RuleGeneratorDT(
metric=f1.fit,
n_total_conditions=2,
tree_ensemble=RandomForestClassifier(
n_estimators=10,
random_state=0
),
target_feat_corr_types='Infer',
    rule_name_prefix='Approve' # Set this so generated approve rules are distinguishable from reject rules
)
# Rule optimisation (for generated rules)
good_gen_opt = BayesianOptimiser(
rule_lambdas=ClassAccessor(
class_tag='good_gen',
class_attribute='rule_lambdas'
),
lambda_kwargs=ClassAccessor(
class_tag='good_gen',
class_attribute='lambda_kwargs'
),
metric=f1.fit,
n_iter=10
)
# Rule optimisation (for existing rules)
good_opt = BayesianOptimiser(
rule_lambdas=good_rule_lambdas,
lambda_kwargs=good_rules.lambda_kwargs,
metric=f1.fit,
n_iter=10
)
# Rule filter (performance-based)
good_sf = SimpleFilter(
threshold=0.1,
operator='>=',
metric=f1.fit
)
# Rule filter (correlation-based)
js = JaccardSimilarity()
good_cf = CorrelatedFilter(
correlation_reduction_class=AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
rules=ClassAccessor(
class_tag='good_gen',
class_attribute='rules'
)
)
###Output
_____no_output_____
###Markdown
Now we can create our **Good Rule Generation *LinearPipeline***. Note that we pass the tag for the optimisation of the generated rules to the `use_init_data` parameter, so that the feature set is passed to the `BayesianOptimiser` class, rather than the output from the `RuleGeneratorDT`:
###Code
good_gen_lp = LinearPipeline(
steps = [
('good_gen', good_gen),
('good_gen_opt', good_gen_opt),
],
use_init_data=['good_gen_opt']
)
###Output
_____no_output_____
###Markdown
And then our **Good *ParallelPipeline*** (noting that one of the steps in this pipeline is the **Good Rule Generation *LinearPipeline*** created above):
###Code
good_gen_lp = ParallelPipeline(
steps = [
('good_gen_lp', good_gen_lp),
('good_opt', good_opt),
]
)
###Output
_____no_output_____
###Markdown
And then finally, our **Good *LinearPipeline***:
###Code
good_lp = LinearPipeline(
steps = [
('good_gen_lp', good_gen_lp),
('good_sf', good_sf),
('good_cf', good_cf)
]
)
###Output
_____no_output_____
###Markdown
Now we can move on to constructing the **Overall Pipelines:** Overall Pipelines First, we'll construct our **Overall *ParallelPipeline*** using the **Fraud *LinearPipeline*** and **Good *LinearPipeline***:
###Code
overall_pp = ParallelPipeline(
steps = [
('fraud_lp', fraud_lp),
('good_lp', good_lp)
]
)
###Output
_____no_output_____
###Markdown
Now we can instantiate the decision engine optimiser. Since we have a constraint on the decision engine (if any approve rules fire, approve the transaction; else if any reject rules fire, reject the transaction; else approve the transaction), we pass the rules remaining after the filtering stages to the relevant elements in the `config` parameter of the `RBSPipeline` class, using the `ClassAccessor` class:
###Code
# Decision engine optimisation metric
opt_metric = Revenue(
y_type='Fraud',
chargeback_multiplier=3
)
# Decision engine (to be optimised)
rbs_pipeline = RBSPipeline(
config=[
[
0, ClassAccessor( # If any approve rules fire, approve
class_tag='good_cf',
class_attribute='rules_to_keep'
),
],
[
1, ClassAccessor( # Else if any reject rules fire, reject
class_tag='fraud_cf',
class_attribute='rules_to_keep'
)
],
],
final_decision=0 # Else approve
)
# Decision engine optimiser
rbs_optimiser = RBSOptimiser(
pipeline=rbs_pipeline,
metric=opt_metric.fit,
rules=ClassAccessor(
class_tag='overall_pp',
class_attribute='rules'
),
n_iter=10
)
###Output
_____no_output_____
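###Markdown
Read procedurally, the `config` above is just the decision constraint stated earlier. The sketch below spells that logic out with two illustrative booleans — it is not how `RBSPipeline` is implemented, only what the configuration means:
###Code
# Procedural reading of the decision engine config above (illustrative only)
def decide(any_approve_rule_fired: bool, any_reject_rule_fired: bool) -> int:
    if any_approve_rule_fired:  # first config element -> decision 0 (approve)
        return 0
    if any_reject_rule_fired:  # second config element -> decision 1 (reject)
        return 1
    return 0  # final_decision -> approve
###Output
_____no_output_____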
###Markdown
Finally, we can instantiate our **Overall *LinearPipeline***:
###Code
overall_lp = LinearPipeline(
steps=[
('overall_pp', overall_pp),
('rbs_optimiser', rbs_optimiser)
]
)
###Output
_____no_output_____
###Markdown
Define the search space Now we need to define the search space for each of the relevant parameters of our pipeline. **Note:** this example does not search across all hyperparameters - you should define your own search spaces based on your use case.To do this, we create a dictionary, where each key corresponds to the tag used for the relevant pipeline step. Each value should be a dictionary of the parameters (keys) and their search spaces (values). Search spaces should be defined using the classes in the `iguanas.space` module:
###Code
# Define additional FScores
f0dot5 = FScore(beta=0.5)
f0dot25 = FScore(beta=0.25)
search_spaces = {
'fraud_gen': {
'n_total_conditions': UniformInteger(2, 7),
},
'fraud_gen_opt': {
'metric': Choice([f0dot25.fit, f0dot5.fit, f1.fit]),
},
'fraud_sf': {
'threshold': UniformFloat(0, 1),
},
'fraud_cf': {
'correlation_reduction_class': Choice(
[
AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
AgglomerativeClusteringReducer(
threshold=0.95,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
)
]
)
},
'good_gen': {
'n_total_conditions': UniformInteger(2, 7),
},
'good_gen_opt': {
'metric': Choice([f0dot25.fit, f0dot5.fit, f1.fit]),
},
'good_sf': {
'threshold': UniformFloat(0, 1),
},
'good_cf': {
'correlation_reduction_class': Choice(
[
AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
AgglomerativeClusteringReducer(
threshold=0.95,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
)
]
)
}
}
###Output
_____no_output_____
###Markdown
Optimise the pipeline hyperparameters Now that we have our pipeline and search spaces defined, we can instantiate the `BayesSearchCV` class. We'll split our data into 3 cross-validation datasets and try 10 different parameter sets.**Note:** since we're using the `Revenue` as the scoring metric for the `BayesSearchCV` class, we need to set the `sample_weight_in_val` parameter to `True`. This ensures that the `sample_weight` passed to the final step in the pipeline is used when applying the `metric` function to the prediction of each validation set (for `Revenue`, the `sample_weight` corresponds to the monetary amount of each transaction, which is required).
###Code
bs = BayesSearchCV(
pipeline=overall_lp,
search_spaces=search_spaces,
metric=opt_metric.fit, # Use the same metric as the RBSOptimiser
cv=3,
n_iter=10,
num_cores=3,
error_score=0,
verbose=1,
sample_weight_in_val=True # Set to True
)
###Output
_____no_output_____
###Markdown
Finally, we can run the `fit` method to optimise the hyperparameters of the pipeline. **Note the following:** * The existing rules contain conditions that rely on unprocessed data (in this case, there are conditions that check for nulls). So for the rule optimisation steps, we must use the unprocessed training data `X_train_raw`; for the rule generation steps, we must use the processed training data `X_train`.* Since we're generating and optimising rules that flag both positive and negative cases (i.e. reject and approve rules in this example), we need to specify what the target is in each case. For the reject rules, we can just use `y_train`, however for the approve rules, we need to flip `y_train` (so that the rule generator and rule optimisers target the negative cases).* We need the `amts_train` to be passed to the `sample_weight` parameter of the `RBSOptimiser`, as we're optimising the decision engine for the `Revenue`.
###Code
bs.fit(
X={
'fraud_gen_lp': X_train, # Use processed features for rule generation
'fraud_opt': X_train_raw, # Use raw features for optimising existing rules
'good_gen_lp': X_train, # Use processed features for rule generation
'good_opt': X_train_raw # Use raw features for optimising existing rules
},
y={
'fraud_lp': y_train, # Use target for Fraud LinearPipeline
'good_lp': 1-y_train, # Flip target for Good LinearPipeline
'rbs_optimiser': y_train # Use target for RBSOptimiser
},
sample_weight={
'fraud_lp': None, # No sample_weight for Fraud LinearPipeline
'good_lp': None, # No sample_weight for Good LinearPipeline
'rbs_optimiser': amts_train # sample_weight for RBSOptimiser
}
)
###Output
--- Optimising pipeline parameters ---
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [01:04<00:00, 6.43s/trial, best loss: -560248.5233333333]
--- Refitting on entire dataset with best pipeline ---
###Markdown
Outputs The `fit` method doesn't return anything. See the `Attributes` section in the class docstring for a description of each attribute generated:
###Code
bs.best_score
bs.best_params
bs.best_index
bs.cv_results.head()
###Output
_____no_output_____
###Markdown
To see the final optimised decision engine configuration and rule set, we first return the parameters of the trained pipeline (stored in the attribute `pipeline_`):
###Code
pipeline_params = bs.pipeline_.get_params()
###Output
_____no_output_____
###Markdown
Then, to see the final optimised decision engine configuration, we filter to the `config` parameter of the `rbs_optimiser` step:
###Code
final_config = pipeline_params['rbs_optimiser']['config']
final_config
###Output
_____no_output_____
###Markdown
This shows us which rules should be used for the approval step (decision `0`) and which rules should be used for the rejection step (decision `1`). To see the logic of our final set of rules, we filter to the `rules` parameter of the `rbs_optimiser` step:
###Code
final_rules = bs.pipeline_.get_params()['rbs_optimiser']['rules']
###Output
_____no_output_____
###Markdown
Then extract the `rule_strings` attribute:
###Code
final_rules.rule_strings
###Output
_____no_output_____
###Markdown
Apply the optimised pipeline We can apply our optimised pipeline to a new data set and make a prediction using the `predict` method:
###Code
y_pred_test = bs.predict(X_test)
###Output
_____no_output_____
###Markdown
Outputs The `predict` method returns the prediction generated by the class in the final step of the pipeline - in this case, the `RBSOptimiser`:
###Code
y_pred_test
###Output
_____no_output_____
###Markdown
We can now calculate the **Revenue** of our optimised pipeline using the test data:
###Code
rev_opt = opt_metric.fit(
y_preds=y_pred_test,
y_true=y_test,
sample_weight=amts_test
)
###Output
_____no_output_____
###Markdown
Comparing this to our original, unoptimised pipeline:
###Code
overall_lp.fit(
X={
'fraud_gen_lp': X_train,
'fraud_opt': X_train_raw,
'good_gen_lp': X_train,
'good_opt': X_train_raw
},
y={
'fraud_lp': y_train,
'good_lp': 1-y_train,
'rbs_optimiser': y_train
},
sample_weight={
'fraud_lp': None,
'good_lp': None,
'rbs_optimiser': y_train
}
)
y_pred_test_init = overall_lp.predict(X_test)
rev_init = opt_metric.fit(
y_preds=y_pred_test_init,
y_true=y_test,
sample_weight=amts_test
)
print(f'Revenue of original, unoptimised pipeline: ${round(rev_init)}')
print(f'Revenue of optimised pipeline: ${round(rev_opt)}')
print(f'Absolute improvement in Revenue: ${round(rev_opt-rev_init)}')
print(f'Percentage improvement in Revenue: {round(100*(rev_opt-rev_init)/rev_init, 2)}%')
###Output
Revenue of original, unoptimised pipeline: $775698
Revenue of optimised pipeline: $857669
Absolute improvement in Revenue: $81972
Percentage improvement in Revenue: 10.57%
###Markdown
Advanced Bayes Search CV Example This is a more advanced example of how the `BayesSearchCV` class can be applied - it's recommended that you first read through the simpler `bayes_search_cv_example`. The `BayesSearchCV` class is used to search for the set of hyperparameters that produce the best decision engine performance for a given Iguanas Pipeline, whilst also reducing the likelihood of overfitting.The process is as follows:* Generate k-fold stratified cross validation datasets. * For each of the training and validation datasets: * Fit the pipeline on the training set using a set of parameters chosen by the Bayesian Optimiser from a given set of ranges. * Apply the pipeline to the validation set to return a prediction. * Use the provided `scorer` to calculate the score of the prediction.* Return the parameter set which generated the highest mean overall score across the validation datasets.In this example, we'll consider the following more advanced workflow (compared to the standard `bayes_search_cv_example` notebook), which considers the generation of a Rules-Based System for a credit card fraud transaction use case: Here, we have a fraud detection use case, and we're aiming to create two distinct rule sets - one for flagging fraudulent behaviour (which we'll refer to as our **Reject** rule set); one for flagging good behaviour (which we'll refer to as our **Approve** rule set). Each of these rule sets will be comprised of a generated rule set and an existing rule set. We'll optimise and filter these two rule sets separately, then combine and feed them into the decision engine optimiser. **Note:** we optimise the generated rules as they'll be created using the `RuleGeneratorDT` class, which generates rules from the branches of decision trees - these split based on gini or entropy - so we can further optimise them for a specific metric. **The decision engine will have the following constraint:** for a given transaction, if any approve rules fire it will be approved; else, if any reject rules fire it will be rejected; else, it will be approved. We'll use the `BayesSearchCV` class to optimise the hyperparameters of the steps in this workflow, **ensuring that we maximise the revenue for our decision engine.** --- Import packages
###Code
from iguanas.rule_generation import RuleGeneratorDT
from iguanas.rule_selection import SimpleFilter, CorrelatedFilter, GreedyFilter, BayesSearchCV
from iguanas.metrics import FScore, Precision, Revenue, JaccardSimilarity
from iguanas.rbs import RBSOptimiser, RBSPipeline
from iguanas.correlation_reduction import AgglomerativeClusteringReducer
from iguanas.pipeline import LinearPipeline, ParallelPipeline
from iguanas.pipeline.class_accessor import ClassAccessor
from iguanas.space import UniformFloat, UniformInteger, Choice
from iguanas.rules import Rules
from iguanas.rule_optimisation import BayesianOptimiser
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from category_encoders.one_hot import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
###Output
_____no_output_____
###Markdown
Read in data Let's read in the [credit card fraud dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud) from Kaggle.**Note:** this data has been altered to include some null values in the `V1` column. This is to simulate unprocessed data (the dataset on Kaggle has been processed using PCA, so there are no null values). It has also been randomly sampled to 10% of its original number of records, to reduce the file size.
###Code
target_col = 'Class'
time_col = 'Time'
amt_col = 'Amount'
# Read in data
df = pd.read_csv('dummy_data/creditcard.csv')
# Sort data by time ascending
df = df.sort_values(time_col, ascending=True)
# Create X and y dataframes
X = df.drop([target_col, time_col], axis=1)
y = df[target_col]
X_train_raw, X_test_raw, y_train, y_test = train_test_split(
X,
y,
test_size=0.33,
random_state=42
)
###Output
_____no_output_____
###Markdown
To calculate the **Revenue**, we need the monetary amount of each transaction - we'll use these later:
###Code
amts_train = X_train_raw[amt_col]
amts_test = X_test_raw[amt_col]
###Output
_____no_output_____
###Markdown
Process data Let's impute the null values with the mean:
###Code
imputer = SimpleImputer(strategy='mean')
X_train = pd.DataFrame(
imputer.fit_transform(X_train_raw),
columns=X_train_raw.columns,
index=X_train_raw.index
)
X_test = pd.DataFrame(
imputer.transform(X_test_raw),
columns=X_test_raw.columns,
index=X_test_raw.index
)
# Check nulls have been imputed
X_train.isna().sum().sum(), X_test.isna().sum().sum()
###Output
_____no_output_____
###Markdown
Existing rules Let's also assume we have the following existing rules, stored in the standard Iguanas string format:
###Code
reject_rule_strings = {
"ExistingReject1": "((X['V1']<0)|(X['V1'].isna()))&(X['V3']<1)",
"ExistingReject2": "(X['V2']>3)",
}
approve_rule_strings = {
"ExistingApprove1": "(X['V1']>0)&(X['V3']>1)",
"ExistingApprove2": "(X['V2']<3)",
"ExistingApprove3": "(X['V4']<3)"
}
###Output
_____no_output_____
###Markdown
We can create a `Rules` class for each of these:
###Code
reject_rules = Rules(rule_strings=reject_rule_strings)
approve_rules = Rules(rule_strings=approve_rule_strings)
###Output
_____no_output_____
###Markdown
Then convert them to the standard Iguanas lambda expression format (we'll need this for the optimisation step):
###Code
reject_rule_lambdas = reject_rules.as_rule_lambdas(
as_numpy=False,
with_kwargs=True
)
approve_rule_lambdas = approve_rules.as_rule_lambdas(
as_numpy=False,
with_kwargs=True
)
###Output
_____no_output_____
###Markdown
---- Set up pipeline Before we can apply the `BayesSearchCV` class, we need to set up our pipeline. To create the workflow shown at the beginning of the notebook, we must use a combination of `LinearPipeline` and `ParallelPipeline` classes as shown below:  Let's begin building the **Reject *LinearPipeline***. Reject *LinearPipeline* Let's first instantiate the classes that we'll use in the pipeline:
###Code
# F1 Score
f1 = FScore(beta=1)
# Precision
p = Precision()
# Rule generation
reject_gen = RuleGeneratorDT(
metric=f1.fit,
n_total_conditions=2,
tree_ensemble=RandomForestClassifier(
n_estimators=10,
random_state=0
),
target_feat_corr_types='Infer',
    rule_name_prefix='Reject' # Set this so generated reject rules are distinguishable from approve rules
)
# Rule optimisation (for generated rules)
reject_gen_opt = BayesianOptimiser(
rule_lambdas=ClassAccessor(
class_tag='reject_gen',
class_attribute='rule_lambdas'
),
lambda_kwargs=ClassAccessor(
class_tag='reject_gen',
class_attribute='lambda_kwargs'
),
metric=f1.fit,
n_iter=10
)
# Rule optimisation (for existing rules)
reject_opt = BayesianOptimiser(
rule_lambdas=reject_rule_lambdas,
lambda_kwargs=reject_rules.lambda_kwargs,
metric=f1.fit,
n_iter=10
)
# Rule filter (performance-based)
reject_sf = SimpleFilter(
threshold=0.1,
operator='>=',
metric=f1.fit
)
# Rule filter (correlation-based)
js = JaccardSimilarity()
reject_cf = CorrelatedFilter(
correlation_reduction_class=AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
rules=ClassAccessor(
class_tag='reject_gen',
class_attribute='rules'
)
)
###Output
_____no_output_____
###Markdown
Now we can create our **Reject Rule Generation *LinearPipeline***. Note that we pass the tag for the optimisation of the generated rules to the `use_init_data` parameter, so that the feature set is passed to the `BayesianOptimiser` class, rather than the output from the `RuleGeneratorDT`:
###Code
reject_gen_lp = LinearPipeline(
steps = [
('reject_gen', reject_gen),
('reject_gen_opt', reject_gen_opt),
],
use_init_data=['reject_gen_opt']
)
###Output
_____no_output_____
###Markdown
And then our **Reject *ParallelPipeline*** (noting that one of the steps in this pipeline is the **Reject Rule Generation *LinearPipeline*** created above):
###Code
reject_pp = ParallelPipeline(
steps = [
('reject_gen_lp', reject_gen_lp),
('reject_opt', reject_opt),
]
)
###Output
_____no_output_____
###Markdown
And then finally, our **Reject *LinearPipeline***:
###Code
reject_lp = LinearPipeline(
steps = [
('reject_pp', reject_pp),
('reject_sf', reject_sf),
('reject_cf', reject_cf)
]
)
###Output
_____no_output_____
###Markdown
Now we can do the same for the **Approve *LinearPipeline***: Approve *LinearPipeline* Let's first instantiate the classes that we'll use in the pipeline:
###Code
# Rule generation
approve_gen = RuleGeneratorDT(
metric=f1.fit,
n_total_conditions=2,
tree_ensemble=RandomForestClassifier(
n_estimators=10,
random_state=0
),
target_feat_corr_types='Infer',
    rule_name_prefix='Approve' # Set this so generated approve rules are distinguishable from reject rules
)
# Rule optimisation (for generated rules)
approve_gen_opt = BayesianOptimiser(
rule_lambdas=ClassAccessor(
class_tag='approve_gen',
class_attribute='rule_lambdas'
),
lambda_kwargs=ClassAccessor(
class_tag='approve_gen',
class_attribute='lambda_kwargs'
),
metric=f1.fit,
n_iter=10
)
# Rule optimisation (for existing rules)
approve_opt = BayesianOptimiser(
rule_lambdas=approve_rule_lambdas,
lambda_kwargs=approve_rules.lambda_kwargs,
metric=f1.fit,
n_iter=10
)
# Rule filter (performance-based)
approve_sf = SimpleFilter(
threshold=0.1,
operator='>=',
metric=f1.fit
)
# Rule filter (correlation-based)
js = JaccardSimilarity()
approve_cf = CorrelatedFilter(
correlation_reduction_class=AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
rules=ClassAccessor(
class_tag='approve_gen',
class_attribute='rules'
)
)
###Output
_____no_output_____
###Markdown
Now we can create our **Approve Rule Generation *LinearPipeline***. Note that we pass the tag for the optimisation of the generated rules to the `use_init_data` parameter, so that the feature set is passed to the `BayesianOptimiser` class, rather than the output from the `RuleGeneratorDT`:
###Code
approve_gen_lp = LinearPipeline(
steps = [
('approve_gen', approve_gen),
('approve_gen_opt', approve_gen_opt),
],
use_init_data=['approve_gen_opt']
)
###Output
_____no_output_____
###Markdown
And then our **Approve *ParallelPipeline*** (noting that one of the steps in this pipeline is the **Approve Rule Generation *LinearPipeline*** created above):
###Code
approve_pp = ParallelPipeline(
steps = [
('approve_gen_lp', approve_gen_lp),
('approve_opt', approve_opt),
]
)
###Output
_____no_output_____
###Markdown
And then finally, our **Approve *LinearPipeline***:
###Code
approve_lp = LinearPipeline(
steps = [
('approve_pp', approve_pp),
('approve_sf', approve_sf),
('approve_cf', approve_cf)
]
)
###Output
_____no_output_____
###Markdown
Now we can move on to constructing the **Overall Pipelines:** Overall Pipelines First, we'll construct our **Overall *ParallelPipeline*** using the **Reject *LinearPipeline*** and **Approve *LinearPipeline***:
###Code
overall_pp = ParallelPipeline(
steps = [
('reject_lp', reject_lp),
('approve_lp', approve_lp)
]
)
###Output
_____no_output_____
###Markdown
Now we can instantiate the decision engine optimiser. Since we have a constraint on the decision engine (if any approve rules fire, approve the transaction; else if any reject rules fire, reject the transaction; else approve the transaction), we pass the rules remaining after the filtering stages to the relevant elements in the `config` parameter of the `RBSPipeline` class, using the `ClassAccessor` class:
###Code
# Decision engine optimisation metric
opt_metric = Revenue(
y_type='Fraud',
chargeback_multiplier=3
)
# Decision engine (to be optimised)
rbs_pipeline = RBSPipeline(
config=[
[
0, ClassAccessor( # If any approve rules fire, approve
class_tag='approve_cf',
class_attribute='rules_to_keep'
),
],
[
1, ClassAccessor( # Else if any reject rules fire, reject
class_tag='reject_cf',
class_attribute='rules_to_keep'
)
],
],
final_decision=0 # Else approve
)
# Decision engine optimiser
rbs_optimiser = RBSOptimiser(
pipeline=rbs_pipeline,
metric=opt_metric.fit,
rules=ClassAccessor(
class_tag='overall_pp',
class_attribute='rules'
),
n_iter=10
)
###Output
_____no_output_____
###Markdown
Finally, we can instantiate our **Overall *LinearPipeline***:
###Code
overall_lp = LinearPipeline(
steps=[
('overall_pp', overall_pp),
('rbs_optimiser', rbs_optimiser)
]
)
###Output
_____no_output_____
###Markdown
Define the search space Now we need to define the search space for each of the relevant parameters of our pipeline. **Note:** this example does not search across all hyperparameters - you should define your own search spaces based on your use case.To do this, we create a dictionary, where each key corresponds to the tag used for the relevant pipeline step. Each value should be a dictionary of the parameters (keys) and their search spaces (values). Search spaces should be defined using the classes in the `iguanas.space` module:
###Code
# Define additional FScores
f0dot5 = FScore(beta=0.5)
f0dot25 = FScore(beta=0.25)
search_spaces = {
'reject_gen': {
'n_total_conditions': UniformInteger(2, 7),
},
'reject_gen_opt': {
'metric': Choice([f0dot25.fit, f0dot5.fit, f1.fit]),
},
'reject_sf': {
'threshold': UniformFloat(0, 1),
},
'reject_cf': {
'correlation_reduction_class': Choice(
[
AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
AgglomerativeClusteringReducer(
threshold=0.95,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
)
]
)
},
'approve_gen': {
'n_total_conditions': UniformInteger(2, 7),
},
'approve_gen_opt': {
'metric': Choice([f0dot25.fit, f0dot5.fit, f1.fit]),
},
'approve_sf': {
'threshold': UniformFloat(0, 1),
},
'approve_cf': {
'correlation_reduction_class': Choice(
[
AgglomerativeClusteringReducer(
threshold=0.9,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
),
AgglomerativeClusteringReducer(
threshold=0.95,
strategy='top_down',
similarity_function=js.fit,
metric=f1.fit
)
]
)
}
}
###Output
_____no_output_____
###Markdown
Optimise the pipeline hyperparameters Now that we have our pipeline and search spaces defined, we can instantiate the `BayesSearchCV` class. We'll split our data into 3 cross-validation datasets and try 10 different parameter sets.**Note:** since we're using the `Revenue` as the scoring metric for the `BayesSearchCV` class, we need to set the `sample_weight_in_val` parameter to `True`. This ensures that the `sample_weight` passed to the final step in the pipeline is used when applying the `metric` function to the prediction of each validation set (for `Revenue`, the `sample_weight` corresponds to the monetary amount of each transaction, which is required).
###Code
bs = BayesSearchCV(
pipeline=overall_lp,
search_spaces=search_spaces,
metric=opt_metric.fit, # Use the same metric as the RBSOptimiser
cv=3,
n_iter=10,
num_cores=3,
error_score=0,
verbose=1,
sample_weight_in_val=True # Set to True
)
###Output
_____no_output_____
###Markdown
Finally, we can run the `fit` method to optimise the hyperparameters of the pipeline. **Note the following:** * The existing rules contain conditions that rely on unprocessed data (in this case, there are conditions that check for nulls). So for the rule optimisation steps, we must use the unprocessed training data `X_train_raw`; for the rule generation steps, we must use the processed training data `X_train`.* Since we're generating and optimising rules that flag both positive and negative cases (i.e. reject and approve rules in this example), we need to specify what the target is in each case. For the reject rules, we can just use `y_train`, however for the approve rules, we need to flip `y_train` (so that the rule generator and rule optimisers target the negative cases).* We need the `amts_train` to be passed to the `sample_weight` parameter of the `RBSOptimiser`, as we're optimising the decision engine for the `Revenue`.
###Code
bs.fit(
X={
'reject_lp': X_train, # Use processed features for rule generation
'reject_opt': X_train_raw, # Use raw features for optimising existing rules
'approve_lp': X_train, # Use processed features for rule generation
'approve_opt': X_train_raw # Use raw features for optimising existing rules
},
y={
'reject_lp': y_train, # Use target for Reject LinearPipeline
'approve_lp': 1-y_train, # Flip target for Approve LinearPipeline
'rbs_optimiser': y_train # Use target for RBSOptimiser
},
sample_weight={
'reject_lp': None, # No sample_weight for Reject LinearPipeline
'approve_lp': None, # No sample_weight for Approve LinearPipeline
'rbs_optimiser': amts_train # sample_weight for RBSOptimiser
}
)
###Output
--- Optimising pipeline parameters ---
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [01:13<00:00, 7.37s/trial, best loss: -555025.6433333332]
--- Refitting on entire dataset with best pipeline ---
###Markdown
Outputs The `fit` method doesn't return anything. See the `Attributes` section in the class docstring for a description of each attribute generated:
###Code
bs.best_score
bs.best_params
bs.best_index
bs.cv_results.head()
###Output
_____no_output_____
###Markdown
To see the final optimised decision engine configuration and rule set, we first return the parameters of the trained pipeline (stored in the attribute `pipeline_`):
###Code
pipeline_params = bs.pipeline_.get_params()
###Output
_____no_output_____
###Markdown
Then, to see the final optimised decision engine configuration, we filter to the `config` parameter of the `rbs_optimiser` step:
###Code
final_config = pipeline_params['rbs_optimiser']['config']
final_config
###Output
_____no_output_____
###Markdown
This shows us which rules should be used for the approval step (decision `0`) and which rules should be used for the rejection step (decision `1`). To see the logic of our final set of rules, we filter to the `rules` parameter of the `rbs_optimiser` step:
###Code
final_rules = bs.pipeline_.get_params()['rbs_optimiser']['rules']
###Output
_____no_output_____
###Markdown
Then extract the `rule_strings` attribute:
###Code
final_rules.rule_strings
###Output
_____no_output_____
###Markdown
Apply the optimised pipeline We can apply our optimised pipeline to a new data set and make a prediction using the `predict` method:
###Code
y_pred_test = bs.predict(X_test)
###Output
_____no_output_____
###Markdown
Outputs The `predict` method returns the prediction generated by the class in the final step of the pipeline - in this case, the `RBSOptimiser`:
###Code
y_pred_test
###Output
_____no_output_____
###Markdown
We can now calculate the **Revenue** of our optimised pipeline using the test data:
###Code
rev_opt = opt_metric.fit(
y_preds=y_pred_test,
y_true=y_test,
sample_weight=amts_test
)
###Output
_____no_output_____
###Markdown
Comparing this to our original, unoptimised pipeline:
###Code
overall_lp.fit(
X={
'reject_gen_lp': X_train,
'reject_opt': X_train_raw,
'approve_gen_lp': X_train,
'approve_opt': X_train_raw
},
y={
'reject_lp': y_train,
'approve_lp': 1-y_train,
'rbs_optimiser': y_train
},
sample_weight={
'reject_lp': None,
'approve_lp': None,
'rbs_optimiser': y_train
}
)
y_pred_test_init = overall_lp.predict(X_test)
rev_init = opt_metric.fit(
y_preds=y_pred_test_init,
y_true=y_test,
sample_weight=amts_test
)
print(f'Revenue of original, unoptimised pipeline: ${round(rev_init)}')
print(f'Revenue of optimised pipeline: ${round(rev_opt)}')
print(f'Absolute improvement in Revenue: ${round(rev_opt-rev_init)}')
print(f'Percentage improvement in Revenue: {round(100*(rev_opt-rev_init)/rev_init, 2)}%')
###Output
Revenue of original, unoptimised pipeline: $775698
Revenue of optimised pipeline: $856076
Absolute improvement in Revenue: $80379
Percentage improvement in Revenue: 10.36%
|
notebooks/structure_utils_tests.ipynb | ###Markdown
Testing the Functionality of the "utils.py" Module
###Code
""" Tetsing the following functionality:
* metrics (rmsd, gdt_ts, gdt_ha, tmscore)
* alignment (kabsch)
* 3d coords (mds)
-----
The data files used contain the id of the original
crystal structures from the RCSB PDB
"""
import os
import sys
# science
import numpy as np
import torch
import matplotlib.pyplot as plt
# molecular utils
import mdtraj
# functionality
sys.path.append("../")
from alphafold2_pytorch.utils import *
# load pdb file - has 1 more N_term than it should
prot = mdtraj.load_pdb("data/1h22_protein_chain_1.pdb").xyz[0].transpose()
###Output
_____no_output_____
###Markdown
Metrics
###Code
# alter a small amount and measure metrics
pred = prot + (2*np.random.rand(*prot.shape) - 1) * 1
# Numpy
rmsd = RMSD(prot, pred)
gdt_ha = GDT(prot, pred, mode="HA")
gdt_ts = GDT(prot, pred, mode="TS")
tm_score = TMscore(prot, pred)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
# Torch
prot, pred = torch.tensor(prot), torch.tensor(pred)
rmsd = RMSD(prot, pred)
gdt_ha = GDT(prot, pred, mode="HA")
gdt_ts = GDT(prot, pred, mode="TS")
tm_score = TMscore(prot, pred)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
###Output
rmsd is: [0.57698405]
gdt_ha is: [0.64710047]
gdt_ts is: [0.8803439]
tm_score is: [0.99800815]
rmsd is: tensor([0.5770], dtype=torch.float64)
gdt_ha is: tensor([0.6471])
gdt_ts is: tensor([0.8803])
tm_score is: tensor([0.9980], dtype=torch.float64)
###Markdown
Alignment
###Code
prot = prot.cpu().numpy()
pred = pred.cpu().numpy()
# rotation matrix
R = np.array([[0.25581, -0.77351, 0.57986],
[-0.85333, -0.46255, -0.24057],
[0.45429, -0.43327, -0.77839]])
# perturb protein (translation + rotation + random)
pred = prot + (2*np.random.rand(*prot.shape) - 1) * 1
pred = np.dot(R, pred)
# check realignment works - torch
pred_mod_, prot_mod_ = kabsch_torch(torch.tensor(pred).double(), torch.tensor(prot).double())
rmsd_torch(prot_mod_, pred_mod_), tmscore_torch(prot_mod_, pred_mod_)
# check realignment works - numpy
pred_mod, prot_mod = kabsch_numpy(pred, prot)
rmsd_numpy(prot_mod, pred_mod), tmscore_numpy(prot_mod, pred_mod)
###Output
_____no_output_____
###Markdown
3d Converter
###Code
prot_traj = mdtraj.load_pdb("data/1h22_protein_chain_1.pdb")
prot = prot_traj.xyz[0].transpose()
# works with a simple distance matrix for now
prot = torch.tensor(prot)
dist_mat = torch.cdist(prot.t(), prot.t())
# plt.imshow(distogram, cmap="viridis_r")
# select indices of backbone for angle calculation and selection
N_mask = torch.tensor( prot_traj.topology.select("name == N and backbone") ).unsqueeze(0)
CA_mask = torch.tensor( prot_traj.topology.select("name == CA and backbone") ).unsqueeze(0)
C_mask = torch.tensor( prot_traj.topology.select("name == C and backbone") ).unsqueeze(0)
CA_mask.shape, N_mask.shape, C_mask.shape
preds, stresses = MDScaling(torch.cat([dist_mat.cpu().unsqueeze(0)]*3, dim=0),
iters=2, tol=1e-5, fix_mirror=1,
N_mask=N_mask, CA_mask=CA_mask, C_mask=C_mask, verbose=2)
preds, stresses = MDScaling(dist_mat.cpu(), iters=5, tol=1e-5, fix_mirror=1,
N_mask=N_mask, CA_mask=CA_mask, C_mask=C_mask, verbose=2)
pred, stress = preds[0], stresses[0]
# check realignment works
pred_mod, prot_mod = Kabsch(pred.numpy(), prot.numpy())
# measure
rmsd = RMSD(prot_mod, pred_mod)
gdt_ha = GDT(prot_mod, pred_mod, mode="HA")
gdt_ts = GDT(prot_mod, pred_mod, mode="TS")
tm_score = TMscore(prot_mod, pred_mod)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
###Output
rmsd is: [1.6815614]
gdt_ha is: [0.34099798]
gdt_ts is: [0.58563722]
tm_score is: [0.98357874]
###Markdown
See reconstruction
###Code
new_dist_mat = torch.cdist(pred.t(), pred.t())
delta_dist = new_dist_mat - dist_mat
fig, ax = plt.subplots(1,3,figsize=(9, 3), sharey=True)
cmap = plt.get_cmap("viridis_r")
ax[0].set_title("Original")
ax[0].imshow(dist_mat, cmap="viridis_r")
ax[1].set_title("Reconstructed")
ax[1].imshow(new_dist_mat, cmap="viridis_r")
ax[2].set_title("Difference")
ax[2].imshow(delta_dist, cmap="viridis_r")
print("Diffs: max = {0} and min {1}".format(np.amax(delta_dist.numpy()),
np.amin(delta_dist.numpy()) ))
# save pdb file and check manually here:
# https://molstar.org/viewer/
buffer_save = mdtraj.load("data/1h22_protein_chain_1.pdb")
buffer_save.xyz = pred_mod.T[None, :, :]
buffer_save.save("data/save_to_check.pdb")
# save pdb file and check manually here:
# https://molstar.org/viewer/
buffer_save = mdtraj.load("data/1h22_protein_chain_1.pdb")
buffer_save.xyz = prot_mod.T[None, :, :]
buffer_save.save("data/save_to_check_base.pdb")
###Output
_____no_output_____
###Markdown
Testing the Functionality of the "utils.py" Module
###Code
""" Tetsing the following functionality:
* metrics (rmsd, gdt_ts, gdt_ha, tmscore)
* alignment (kabsch)
* 3d coords (mds)
-----
The data files used contain the id of the original
crystal structures from the RCSB PDB
"""
import os
import sys
# science
import numpy as np
import torch
import matplotlib.pyplot as plt
# molecular utils
import mdtraj
# functionality
sys.path.append("../")
from alphafold2_pytorch.utils import *
# load pdb file - has 1 more N_term than it should
prot = mdtraj.load_pdb("data/1h22_protein_chain_1.pdb").xyz[0].transpose()
###Output
_____no_output_____
###Markdown
Metrics
###Code
# alter a small amount and measure metrics
pred = prot + (2*np.random.rand(*prot.shape) - 1) * 1
# Numpy
rmsd = RMSD(prot, pred)
gdt_ha = GDT(prot, pred, mode="HA")
gdt_ts = GDT(prot, pred, mode="TS")
tm_score = TMscore(prot, pred)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
# Torch
prot, pred = torch.tensor(prot), torch.tensor(pred)
rmsd = RMSD(prot, pred)
gdt_ha = GDT(prot, pred, mode="HA")
gdt_ts = GDT(prot, pred, mode="TS")
tm_score = TMscore(prot, pred)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
###Output
rmsd is: [0.5763211]
gdt_ha is: [0.6478085]
gdt_ts is: [0.88125421]
tm_score is: [0.99801268]
rmsd is: tensor([0.5763], dtype=torch.float64)
gdt_ha is: tensor([0.6478])
gdt_ts is: tensor([0.8813])
tm_score is: tensor([0.9980], dtype=torch.float64)
###Markdown
Alignment
###Code
prot = prot.cpu().numpy()
pred = pred.cpu().numpy()
# rotation matrix
R = np.array([[0.25581, -0.77351, 0.57986],
[-0.85333, -0.46255, -0.24057],
[0.45429, -0.43327, -0.77839]])
# perturb protein (translation + rotation + random)
pred = prot + (2*np.random.rand(*prot.shape) - 1) * 1
pred = np.dot(R, pred)
# check realignment works - torch
pred_mod_, prot_mod_ = kabsch_torch(torch.tensor(pred).double(), torch.tensor(prot).double())
rmsd_torch(prot_mod_, pred_mod_), tmscore_torch(prot_mod_, pred_mod_)
# check realignment works - numpy
pred_mod, prot_mod = kabsch_numpy(pred, prot)
rmsd_numpy(prot_mod, pred_mod), tmscore_numpy(prot_mod, pred_mod)
###Output
_____no_output_____
###Markdown
3d Converter
###Code
prot_traj = mdtraj.load_pdb("data/1h22_protein_chain_1.pdb")
prot = prot_traj.xyz[0].transpose()
# works with a simple distance matrix for now
prot = torch.tensor(prot)
dist_mat = torch.cdist(prot.t(), prot.t())
# plt.imshow(distogram, cmap="viridis_r")
# select indices of backbone for angle calculation and selection
N_mask = prot_traj.topology.select("name == N and backbone")
CA_mask = prot_traj.topology.select("name == CA and backbone")
C_mask = prot_traj.topology.select("name == C and backbone")
CA_mask.shape, N_mask.shape, C_mask.shape
preds, stresses = MDScaling(dist_mat.cpu(), iters=10, tol=1e-5, fix_mirror=1,
N_mask=N_mask, CA_mask=CA_mask, C_mask=C_mask, verbose=2)
pred, stress = preds[0], stresses[0]
# check realignment works
pred_mod, prot_mod = Kabsch(pred.numpy(), prot.numpy())
# measure
rmsd = RMSD(prot_mod, pred_mod)
gdt_ha = GDT(prot_mod, pred_mod, mode="HA")
gdt_ts = GDT(prot_mod, pred_mod, mode="TS")
tm_score = TMscore(prot_mod, pred_mod)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
###Output
rmsd is: [1.6588651]
gdt_ha is: [0.35185435]
gdt_ts is: [0.59554956]
tm_score is: [0.98402625]
###Markdown
See reconstruction
###Code
new_dist_mat = torch.cdist(pred.t(), pred.t())
delta_dist = new_dist_mat - dist_mat
fig, ax = plt.subplots(1,3,figsize=(9, 3), sharey=True)
cmap = plt.get_cmap("viridis_r")
ax[0].set_title("Original")
ax[0].imshow(dist_mat, cmap="viridis_r")
ax[1].set_title("Reconstructed")
ax[1].imshow(new_dist_mat, cmap="viridis_r")
ax[2].set_title("Difference")
ax[2].imshow(delta_dist, cmap="viridis_r")
print("Diffs: max = {0} and min {1}".format(np.amax(delta_dist.numpy()),
np.amin(delta_dist.numpy()) ))
# save pdb file and check manually here:
# https://molstar.org/viewer/
buffer_save = mdtraj.load("data/1h22_protein_chain_1.pdb")
buffer_save.xyz = pred_mod.T[None, :, :]
buffer_save.save("data/save_to_check.pdb")
# save pdb file and check manually here:
# https://molstar.org/viewer/
buffer_save = mdtraj.load("data/1h22_protein_chain_1.pdb")
buffer_save.xyz = prot_mod.T[None, :, :]
buffer_save.save("data/save_to_check_base.pdb")
###Output
_____no_output_____
###Markdown
Testing the Functionality of the "utils.py" Module
###Code
""" Tetsing the following functionality:
* metrics (rmsd, gdt_ts, gdt_ha, tmscore)
* alignment (kabsch)
* 3d coords (mds)
-----
The data files used contain the id of the original
crystal structures from the RCSB PDB
"""
import os
import sys
# science
import numpy as np
import torch
import matplotlib.pyplot as plt
# molecular utils
import mdtraj
# functionality
sys.path.append("../")
from alphafold2_pytorch.utils import *
# load pdb file - has 1 more N_term than it should
prot = mdtraj.load_pdb("data/1h22_protein_chain_1.pdb").xyz[0].transpose()
###Output
_____no_output_____
###Markdown
Metrics
###Code
# alter a small amount and measure metrics
pred = prot + (2*np.random.rand(*prot.shape) - 1) * 1
# Numpy
rmsd = RMSD(prot, pred)
gdt_ha = GDT(prot, pred, mode="HA")
gdt_ts = GDT(prot, pred, mode="TS")
tm_score = TMscore(prot, pred)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
# Torch
prot, pred = torch.tensor(prot), torch.tensor(pred)
rmsd = RMSD(prot, pred)
gdt_ha = GDT(prot, pred, mode="HA")
gdt_ts = GDT(prot, pred, mode="TS")
tm_score = TMscore(prot, pred)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
###Output
rmsd is: [0.57943473]
gdt_ha is: [0.64511126]
gdt_ts is: [0.87869184]
tm_score is: [0.99799123]
rmsd is: tensor([0.5794], dtype=torch.float64)
gdt_ha is: tensor([0.6451])
gdt_ts is: tensor([0.8787])
tm_score is: tensor([0.9980], dtype=torch.float64)
###Markdown
Alignment
###Code
prot = prot.cpu().numpy()
pred = pred.cpu().numpy()
# rotation matrix
R = np.array([[0.25581, -0.77351, 0.57986],
[-0.85333, -0.46255, -0.24057],
[0.45429, -0.43327, -0.77839]])
# perturb protein (translation + rotation + random)
pred = prot + (2*np.random.rand(*prot.shape) - 1) * 1
pred = np.dot(R, pred)
# check realignment works - torch
pred_mod_, prot_mod_ = kabsch_torch(torch.tensor(pred).double(), torch.tensor(prot).double())
rmsd_torch(prot_mod_, pred_mod_), tmscore_torch(prot_mod_, pred_mod_)
# check realignment works - numpy
pred_mod, prot_mod = kabsch_numpy(pred, prot)
rmsd_numpy(prot_mod, pred_mod), tmscore_numpy(prot_mod, pred_mod)
###Output
_____no_output_____
###Markdown
3d Converter
###Code
prot_traj = mdtraj.load_pdb("data/1h22_protein_chain_1.pdb")
prot = prot_traj.xyz[0].transpose()
# works with a simple distance matrix for now
prot = torch.tensor(prot)
dist_mat = torch.cdist(prot.t(), prot.t())
# plt.imshow(distogram, cmap="viridis_r")
# select indices of backbone for angle calculation and selection
N_mask = torch.tensor( prot_traj.topology.select("name == N and backbone") ).unsqueeze(0)
CA_mask = torch.tensor( prot_traj.topology.select("name == CA and backbone") ).unsqueeze(0)
C_mask = torch.tensor( prot_traj.topology.select("name == C and backbone") ).unsqueeze(0)
CA_mask.shape, N_mask.shape, C_mask.shape
preds, stresses = MDScaling(torch.cat([dist_mat.cpu().unsqueeze(0)]*3, dim=0),
iters=2, tol=1e-5, fix_mirror=1,
N_mask=N_mask, CA_mask=CA_mask, C_mask=C_mask, verbose=2)
preds, stresses = MDScaling(dist_mat.cpu(), iters=5, tol=1e-5, fix_mirror=1,
N_mask=N_mask, CA_mask=CA_mask, C_mask=C_mask, verbose=2)
pred, stress = preds[0], stresses[0]
# check realignment works
pred_mod, prot_mod = Kabsch(pred.numpy(), prot.numpy())
# measure
rmsd = RMSD(prot_mod, pred_mod)
gdt_ha = GDT(prot_mod, pred_mod, mode="HA")
gdt_ts = GDT(prot_mod, pred_mod, mode="TS")
tm_score = TMscore(prot_mod, pred_mod)
print("rmsd is: ", rmsd)
print("gdt_ha is: ", gdt_ha)
print("gdt_ts is: ", gdt_ts)
print("tm_score is: ", tm_score)
###Output
rmsd is: [1.6815614]
gdt_ha is: [0.34099798]
gdt_ts is: [0.58563722]
tm_score is: [0.98357874]
###Markdown
See reconstruction
###Code
new_dist_mat = torch.cdist(pred.t(), pred.t())
delta_dist = new_dist_mat - dist_mat
fig, ax = plt.subplots(1,3,figsize=(9, 3), sharey=True)
cmap = plt.get_cmap("viridis_r")
ax[0].set_title("Original")
ax[0].imshow(dist_mat, cmap="viridis_r")
ax[1].set_title("Reconstructed")
ax[1].imshow(new_dist_mat, cmap="viridis_r")
ax[2].set_title("Difference")
ax[2].imshow(delta_dist, cmap="viridis_r")
print("Diffs: max = {0} and min {1}".format(np.amax(delta_dist.numpy()),
np.amin(delta_dist.numpy()) ))
# save pdb file and check manually here:
# https://molstar.org/viewer/
buffer_save = mdtraj.load("data/1h22_protein_chain_1.pdb")
buffer_save.xyz = pred_mod.T[None, :, :]
buffer_save.save("data/save_to_check.pdb")
# save pdb file and check manually here:
# https://molstar.org/viewer/
buffer_save = mdtraj.load("data/1h22_protein_chain_1.pdb")
buffer_save.xyz = prot_mod.T[None, :, :]
buffer_save.save("data/save_to_check_base.pdb")
###Output
_____no_output_____ |
9.25.20-ClassSession.ipynb | ###Markdown
Working with Lists and Tuples In this session we looked at more features of lists and introduced tuples.
###Code
#Two lists to use for examples
numbers = [10,54,76,89,43,54,23,21,89,56,45,36,42,19]
names = ['tom','john','sally','rex','george','stella','paul','ringo']
###Output
_____no_output_____
###Markdown
Slicing ListsPython provides several different ways to get "slices" of data from a list. That is, ways to extract particular elements from lists.
###Code
numbers[4] #Can grab a single element.
#len(numbers)
#len(names)
###Output
_____no_output_____
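###Markdown
As an extra illustration (ours, not from the original class session), negative indices count backwards from the end of a list, so -1 refers to the last element.
###Code
print(numbers[-1]) # last element
print(numbers[-3:]) # last three elements
###Output
_____no_output_____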
###Markdown
This will select sequential elements from a list. The last number is the stop value, which is exclusive, so the slice ends one element before it.
###Code
numbers[3:9]
###Output
_____no_output_____
###Markdown
Start at the beginning and get numbers up to the endpoint
###Code
numbers[:3]
###Output
_____no_output_____
###Markdown
Start at index and get numbers to end of list
###Code
numbers[6:]
###Output
_____no_output_____
###Markdown
Step syntax - grab numbers by some step value
###Code
numbers[::2]
###Output
_____no_output_____
###Markdown
You can use the step syntax to reverse a list
###Code
numbers = numbers[::-1]
###Output
_____no_output_____
###Markdown
You can replace slices of the list with new data
###Code
numbers[0:3]=[20,20,20]
numbers[7]=22
###Output
_____no_output_____
###Markdown
You can also replace slices of the list with new data using the step syntax
###Code
numbers[::2] = [10,100,1000,10000,100000,10000000,100000000]
numbers
###Output
_____no_output_____
###Markdown
Slicing works regardless of data types.
###Code
names[0:3]
names[3:]
names[::2] = [10.76,'Alex','Chris','Carl']
names
###Output
_____no_output_____
###Markdown
Concatenate ListsIf you have two or more lists, you can "add" them together to make a larger list.
###Code
unified = numbers + names
for i in range(len(unified)):
print(unified[i], end=' ')
unified = unified + [('Bellarmine','UofL','Center')] #data is a tuple
unified
###Output
_____no_output_____
###Markdown
TuplesTuples are another data structure you can use that is similar to lists. The main difference is that list elements are mutable, while tuple elements are immutable. You use () to delineate a tuple.
###Code
data = ('George', 1964, 'Beatles')
type(names)
type(data)
data[0:2]
###Output
_____no_output_____
###Markdown
The code below will throw an error if you run it. Elements in a tuple are immutable.
###Code
#data[0]='John'
data = ('John', 1964, 'Beatles')
###Output
_____no_output_____
###Markdown
UnpackingUnpacking is a convenient way to pull data out of a tuple and assign each element to an individual variable. The first cell below shows how we would do this without unpacking.
###Code
name = data[0]
year = data[1]
band = data[2]
print(name, year, band)
###Output
_____no_output_____
###Markdown
A more convenient approach is to unpack. Simply provide a comma-delimited list of variables set equal to the tuple. Python will automatically unpack it.
###Code
name, year, band = data
print(name)
###Output
_____no_output_____
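###Markdown
As a small additional example (ours, not from the original session), unpacking also gives a clean way to swap two variables without a temporary variable.
###Code
a, b = 10, 20
a, b = b, a # the right-hand side is packed into a tuple, then unpacked
print(a, b) # 20 10
###Output
_____no_output_____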
###Markdown
You can also create lists of tuples (and tuples with lists in them).
###Code
beatles = [('John', 1964, 'Beatles'), ('Paul', 1964, 'Beatles'),
('George', 1964, 'Beatles'), ('Ringo', 1964, 'Beatles')]
beatles
###Output
_____no_output_____
###Markdown
Using indices, we can pull out any entry in the "table"
###Code
beatles[2][2]
###Output
_____no_output_____
###Markdown
And we can use unpacking to get a particular tuple in the list and assign its elements to multiple variables.
###Code
name, year, band = beatles[3]
name
###Output
_____no_output_____ |
Visualize Programming Language Popularity using tiobeindexpy.ipynb | ###Markdown
Top 20 Based on Ratings
###Code
sns.barplot('Ratings', 'Programming Language', data = top_20).set_title('Mar 2019 - Programming Popularity')
###Output
_____no_output_____
###Markdown
Biggest Gainers in a month (from Top 20)
###Code
top_20['Change.1'] = top_20.loc[:,'Change.1'].apply(lambda x: float(x.strip("%")))
sns.barplot('Programming Language', 'Change.1',
data = top_20.sort_values("Change.1",ascending = False)[0:5]).set_title('Mar 2018 vs 2019 - Language Popularity - Biggest Gainers from Top 20')
###Output
_____no_output_____
###Markdown
Biggest Losers in a month (from Top 20)
###Code
sns.barplot('Change.1', 'Programming Language',
data = top_20.sort_values("Change.1",ascending = True)[0:5]).set_title('Mar 2018 vs 2019 - Language Popularity - Biggest Losers from Top 20')
###Output
_____no_output_____
###Markdown
Hall of Fame - Last 15 years
###Code
hof = tbpy.hall_of_fame()
hof.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
###Output
_____no_output_____ |
Lab-3/My Solutions/RL.ipynb | ###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement LearningReinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decision. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, games provide a convenient proving ground for developing RL algorithms and agents. Games have some properties that make them particularly well suited for RL: 1. In many cases, games have perfectly describable environments. For example, all rules of chess can be formally written and programmed into a chess game simulator;2. Games are massively parallelizable. Since they do not require running in the real world, simultaneous environments can be run on large data clusters; 3. Simpler scenarios in games enable fast prototyping. This speeds up the development of algorithms that could eventually run in the real-world; and4. ... Games are fun! In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent, that is trying, within its environment, to find the optimal way to achieve a goal. The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space.2. [**Pong**](https://en.wikipedia.org/wiki/Pong): Beat your competitors (whether other AI or humans!) at the game of Pong. Environment with a high-dimensional observation space -- learning directly from raw pixels.Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
!apt-get install -y xvfb python-opengl x11-utils > /dev/null 2>&1
!pip install gym pyvirtualdisplay scikit-video > /dev/null 2>&1
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import base64, io, time, gym
import IPython, functools
import matplotlib.pyplot as plt
from tqdm import tqdm
!pip install mitdeeplearning
import mitdeeplearning as mdl
###Output
TensorFlow 2.x selected.
Collecting mitdeeplearning
[?25l Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
[K |████████████████████████████████| 2.1MB 31.9MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.2)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.38.0)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.12.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=8323c13d20dd612bd264c49f7de140cf8f1b0d43d80841612cf18054a4f0517e
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors. Part 1: Cartpole 3.1 Define the Cartpole environment and agent Environment In order to model the environment for both the Cartpole and Pong tasks, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v0")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function.One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v0")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below:Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment our observations are:1. Cart position2. Cart velocity3. Pole angle4. Pole rotation rateWe can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
Environment has observation space = Box(4,)
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
Number of possible actions that the agent can choose from = 2
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None)
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. **Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!**
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation which is fed as input to the model
# Returns:
# action: choice of agent action
def choose_action(model, observation):
# add batch dimension to the observation
observation = np.expand_dims(observation, axis=0)
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation)
# pass the log probabilities through a softmax to compute true probabilities
prob_weights = tf.nn.softmax(logits).numpy()
'''TODO: randomly sample from the prob_weights to pick an action.
Hint: carefully consider the dimensionality of the input probabilities (vector) and the output action (scalar)'''
action = np.random.choice(n_actions, size=1, p=prob_weights.flatten())[0]
return action
###Output
_____no_output_____
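###Markdown
As a quick sanity check (our addition, not part of the original lab), we can sample a single action from the still-untrained agent for one observation. The variable names below are only illustrative.
###Code
test_observation = env.reset()
sampled_action = choose_action(cartpole_model, test_observation)
print("Sampled action:", sampled_action) # for Cartpole this should be 0 or 1
###Output
_____no_output_____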
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple memory buffer that contains the agent's observations, actions, and received rewards from a given episode. **Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action)
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward)
memory = Memory()
###Output
_____no_output_____
###Markdown
3.3 Reward functionWe're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about if and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. Recall from lecture that discounting rewards is a similar concept to discounting money in the case of interest.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected at that time step $t$, within a learning episode, and projecting into the future. We define the return (cumulative reward) at a time step $t$, $R_{t}$, as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor and $r_{t}$ is the reward at time step $t$, and the index $k$ increments projection into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation will be limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length equal to the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory. What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
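###Markdown
As a small worked example (our addition, not from the original lab): for rewards [0, 0, 1] and gamma = 0.95, the un-normalized returns are [0.9025, 0.95, 1.0] -- the final reward is depreciated the further back in time we go. The normalization step then shifts and rescales these values.
###Code
example_rewards = [0.0, 0.0, 1.0]
R, unnormalized = 0.0, np.zeros(len(example_rewards))
for t in reversed(range(len(example_rewards))):
    R = R * 0.95 + example_rewards[t]
    unnormalized[t] = R
print("Un-normalized returns:", unnormalized)
print("Normalized returns:", discount_rewards(example_rewards))
###Output
_____no_output_____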
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, this means that minimizing **negative likelihood** is equivalent to minimizing **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=actions)
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean(rewards*neg_logprob)
return loss
###Output
_____no_output_____
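###Markdown
As a quick sanity check (our addition, not part of the original lab), we can evaluate the loss on dummy logits, actions, and rewards; actions associated with larger rewards are weighted more heavily in the loss.
###Code
dummy_logits = tf.constant([[1.0, -1.0], [0.5, 0.5]])
dummy_actions = np.array([0, 1])
dummy_rewards = np.array([1.0, -0.5], dtype=np.float32)
print(compute_loss(dummy_logits, dummy_actions, dummy_rewards))
###Output
_____no_output_____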
###Markdown
Now let's use the loss function to define a training step of our learning algorithm:
###Code
### Training step (forward and backpropagation) ###
def train_step(model, optimizer, observations, actions, discounted_rewards):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
logits = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = compute_loss(logits, actions, discounted_rewards)
'''TODO: run backpropagation to minimize the loss using the tape.gradient method.
Use `model.trainable_variables`'''
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
###Markdown
3.5 Run cartpole!Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
###Code
### Cartpole training! ###
# Learning rate and optimizer
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
train_step(cartpole_model, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
# update our observatons
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
###Output
Successfully saved 136 frames into CartPole-v0.mp4!
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: PongIn Cartpole, we dealt with an environment that was static -- in other words, it didn't change over time. What happens if our environment is dynamic and unpredictable? Well that's exactly the case in [Pong](https://en.wikipedia.org/wiki/Pong), since part of the environment is the opposing player. We don't know how our opponent will act or react to our actions, so the complexity of our problem increases. It also becomes much more interesting, since we can compete to beat our opponent. RL provides a powerful framework for training AI systems with the ability to handle and interact with dynamic, unpredictable environments. In this part of the lab, we'll use the tools and workflow we explored in Part 1 to build an RL agent capable of playing the game of Pong. 3.6 Define and inspect the Pong environmentAs with Cartpole, we'll instantiate the Pong environment in the OpenAI gym, using a seed of 1.
###Code
env = gym.make("Pong-v0", frameskip=5)
env.seed(1); # for reproducibility
###Output
_____no_output_____
###Markdown
Let's next consider the observation space for the Pong environment. Instead of four physical descriptors of the cart-pole setup, in the case of Pong our observations are the individual video frames (i.e., images) that depict the state of the board. Thus, the observations are 210x160 RGB images (arrays of shape (210,160,3)).We can again confirm the size of the observation space by query:
###Code
print("Environment has observation space =", env.observation_space)
###Output
Environment has observation space = Box(210, 160, 3)
###Markdown
In Pong, at every time step, the agent (which controls the paddle) has six actions to choose from: no-op (no operation), move right, move left, fire, fire right, and fire left. Let's confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
Number of possible actions that the agent can choose from = 6
###Markdown
3.7 Define the Pong agentAs before, we'll use a neural network to define our agent. What network architecture do you think would be especially well suited to this game? Since our observations are now in the form of images, we'll add convolutional layers to the network to increase the learning capacity of our network.
###Code
### Define the Pong agent ###
# Functionally define layers for convenience
# All convolutional layers will have ReLu activation
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the Pong agent
def create_pong_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 16 7x7 filters with 4x4 stride
Conv2D(filters=16, kernel_size=7, strides=4),
# TODO: define convolutional layers with 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2),
# TODO: define convolutional layers with 48 3x3 filters and 2x2 stride
Conv2D(filters=48, kernel_size=3, strides=2),
Flatten(),
# Fully connected layer and output
Dense(units=64, activation='relu'),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in
Dense(units=n_actions,activation=None)
])
return model
pong_model = create_pong_model()
###Output
_____no_output_____
###Markdown
Since we've already defined the action function, `choose_action(model, observation)`, we don't need to define it again. Instead, we'll be able to reuse it later on by passing in our new model we've just created, `pong_model`. This is awesome because our action function provides a modular and generalizable method for all sorts of RL agents! 3.8 Pong-specific functionsIn Part 1 (Cartpole), we implemented some key functions and classes to build and train our RL agent -- `choose_action(model, observation)` and the `Memory` class, for example. However, in getting ready to apply these to a new game like Pong, we might need to make some slight modifications. Namely, we need to think about what happens when a game ends. In Pong, we know a game has ended if the reward is +1 (we won!) or -1 (we lost unfortunately). Otherwise, we expect the reward at a timestep to be zero -- the players (or agents) are just playing eachother. So, after a game ends, we will need to reset the reward to zero when a game ends. This will result in a modified reward function.
###Code
### Pong reward function ###
# Compute normalized, discounted rewards for Pong (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor. Note increase to 0.99 -- rate of depreciation will be slower.
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.99):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# NEW: Reset the sum if the reward is not 0 (the game has ended!)
if rewards[t] != 0:
R = 0
# update the total discounted reward as before
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
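###Markdown
As a small check (our addition, not from the original lab): with rewards [0, 0, 1, 0, -1] the discounting restarts at the +1 that ends the first game, so the -1 from the second game does not propagate back into the returns of the first game.
###Code
print(discount_rewards([0.0, 0.0, 1.0, 0.0, -1.0]))
###Output
_____no_output_____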
###Markdown
Additionally, we have to consider the nature of the observations in the Pong environment, and how they will be fed into our network. Our observations in this case are images. Before we input an image into our network, we'll do a bit of pre-processing to crop and scale, clean up the background colors to a single color, and set the important game elements to a single color. Let's use this function to visualize what an observation might look like before and after pre-processing.
###Code
observation = env.reset()
for i in range(30):
observation, _,_,_ = env.step(0)
observation_pp = mdl.lab3.preprocess_pong(observation)
f = plt.figure(figsize=(10,3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.imshow(observation); ax.grid(False);
ax2.imshow(np.squeeze(observation_pp)); ax2.grid(False); plt.title('Preprocessed Observation');
###Output
_____no_output_____
###Markdown
What do you notice? How might these changes be important for training our RL algorithm? 3.9 Training PongWe're now all set up to start training our RL algorithm and agent for the game of Pong! We've already defined our loss function with `compute_loss`, which employs policy gradient learning, as well as our backpropagation step with `train_step` -- which is beautiful! We will use these functions to train the Pong agent. Let's walk through the training block.In Pong, rather than feeding our network one image at a time, it can actually improve performance to input the difference between two consecutive observations, which really gives us information about the movement between frames -- how the game is changing. We'll first pre-process the raw observation, `x`, and then we'll compute the difference with the image frame we saw one timestep before. This observation change will be forward propagated through our Pong agent, the CNN model, which will then predict the next action to take based on this observation. The raw reward will be computed, and the observation, action, and reward will be recorded into memory. This will continue until a training episode, i.e., a game, ends.Then, we will compute the discounted rewards, and use this information to execute a training step. Memory will be cleared, and we will do it all over again!Let's run the code block to train our Pong agent. Note that completing training will take quite a bit of time (estimated at least a couple of hours). We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning.
###Code
### Training Pong ###
# Hyperparameters
learning_rate=1e-4
MAX_ITERS = 100 # increase the maximum number of episodes, since Pong is more complex!
# Model and optimizer
pong_model = create_pong_model()
optimizer = tf.keras.optimizers.Adam(learning_rate)
# plotting
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=5, xlabel='Iterations', ylabel='Rewards')
memory = Memory()
for i_episode in range(MAX_ITERS):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
previous_frame = mdl.lab3.preprocess_pong(observation)
while True:
# Pre-process image
current_frame = mdl.lab3.preprocess_pong(observation)
'''TODO: determine the observation change
Hint: this is the difference between the past two frames'''
obs_change = current_frame-previous_frame
'''TODO: choose an action for the pong model, using the frame difference, and evaluate'''
action = choose_action(pong_model,obs_change)
# Take the chosen action
next_observation, reward, done, info = env.step(action)
'''TODO: save the observed frame difference, the action that was taken, and the resulting reward!'''
memory.add_to_memory(obs_change, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append( total_reward )
# begin training
train_step(pong_model,
optimizer,
observations = np.stack(memory.observations, 0),
actions = np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
memory.clear()
break
observation = next_observation
previous_frame = current_frame
###Output
_____no_output_____
###Markdown
Finally we can put our trained agent to the test! It will play in a newly instantiated Pong environment against the "computer", a base AI system for Pong. Your agent plays as the green paddle. Let's watch the match instant replay!
###Code
saved_pong = mdl.lab3.save_video_of_model(
pong_model, "Pong-v0", obs_diff=True,
pp_fn=mdl.lab3.preprocess_pong)
mdl.lab3.play_video(saved_pong)
###Output
Successfully saved 1011 frames into Pong-v0.mp4!
|
4-assets/BOOKS/Jupyter-Notebooks/Overflow/23_Laplace.ipynb | ###Markdown
Physics 256 Solving Laplace's Equation\begin{eqnarray}\nabla \cdot \vec{E} &= \frac{\rho}{\varepsilon_0} \quad&\quad \nabla \times \vec{E} &= -\frac{\partial\vec{B}}{\partial t} \newline\nabla \cdot \vec{B} &= 0 \quad&\quad \nabla \times \vec{B} &= \mu_0 \left(\vec{J} +\varepsilon_0 \frac{\partial\vec{E}}{\partial t}\right)\end{eqnarray}
###Code
import style
style._set_css_style('../include/bootstrap.css')
###Output
_____no_output_____
###Markdown
Last Time [Notebook Link: 22 FFT](./22_FFT.ipynb)- Fast Fourier transform- Using the FFT to analyze chaos Today- Elliptical differential equations- Spatial discretization and the Laplace equation Setting up the Notebook
###Code
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('../include/notebook.mplstyle');
%config InlineBackend.figure_format = 'svg'
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Laplace's EquationIn the absence of any charge density ($\rho=0$) the scalar electric potential is related to the electric field via:\begin{equation}\vec{E} = -\nabla V\end{equation}and thus the first of Maxwell's equations is the Laplace equation for the scalar potential:\begin{equation}\nabla^2 V = 0.\end{equation}In three spatial dimensions this has the form of an *elliptic* partial differential equation:\begin{equation}\frac{\partial^2}{\partial x^2} V(x,y,z) + \frac{\partial^2}{\partial y^2} V(x,y,z) + \frac{\partial^2}{\partial z^2} V(x,y,z) = 0.\end{equation}This equation is very different from the ordinary differential equations (ODE) we have solved thus far in this class. Here, we have a 2nd order partial differential equation (PDE) where we know the *boundary conditions*. Unlike for the case of ODEs, there is no single class of systematic integrators which differ only by their accuracy. Instead, we have to determine the best algorithm on a case-by-case basis. For elliptical PDEs, we will study **relaxation** methods which work well. Discretization In analogy to our approach for ODEs (where we discretized time) we will discretize space by writing:\begin{align}x_i &= i \Delta x \newliney_i &= i \Delta y \newlinez_i &= i \Delta z \end{align}where $\Delta x, \Delta y, \Delta z \ll 1$ and we define:\begin{equation}V(x_i,y_j,z_k) = V_{ijk} = V(i,j,k). \end{equation}Our first step is to write the Laplace equation as a *difference* equation. Forward DerivativeWe have already used this discrete approximation to the derivative derived from the Taylor expansion near $x_i$:\begin{equation}\frac{\partial V}{\partial x} \approx \frac{V(i+1,j,k) - V(i,j,k)}{\Delta x}.\end{equation} Backwards DerivativeWe could also have expanded in the opposite direction which would give:\begin{equation}\frac{\partial V}{\partial x} \approx \frac{V(i,j,k) - V(i-1,j,k)}{\Delta x}.\end{equation} Centered DerivativeLet's combine these two approaches. Consider the Taylor expansion for a function $f$ of a single variable $x$:\begin{align}f(x+\Delta x) &= f(x) + f'(x) \Delta x + \frac{1}{2} f''(x) (\Delta x)^2 + \frac{1}{6} f'''(x)(\Delta x)^3 + \cdots \newlinef(x-\Delta x) &= f(x) - f'(x) \Delta x + \frac{1}{2} f''(x) (\Delta x)^2 - \frac{1}{6} f'''(x)(\Delta x)^3 + \cdots \end{align}Subtracting these two expressions yields:\begin{align}f(x+\Delta x) - f(x-\Delta x) &= 2 f'(x) \Delta x + \frac{1}{3} f'''(x) (\Delta x)^3 \newline \Rightarrow f'(x) &= \frac{d f}{dx} = \frac{f(x+\Delta x) - f(x-\Delta x)}{2 \Delta x} + \mathrm{O}(\Delta x^2) .\end{align}This is the centered derivative and it is accurate to order $\Delta x^2$ as opposed to order $\Delta x$ for the forward and backward derivatives. 2nd DerivativeIf we added instead of subtracting we would have found:\begin{align}f(x+\Delta x) + f(x-\Delta x) &= 2 f(x) + f''(x) (\Delta x)^2 \newline \Rightarrow f''(x) &= \frac{d^2 f}{dx^2} = \frac{f(x+\Delta x) + f(x-\Delta x) - 2f(x)}{(\Delta x)^2} + \mathrm{O}(\Delta x^2) .\end{align}We can think of this as the combination of a forward and backward derivative at step $\Delta x/2$. Programming challenge Consider the function $f(x) = \ln x$. Compare the forward and centered derivative of $f(x)$ on $x \in [2,3]$ with the exact result using $\Delta x = 0.1$. Compare the 2nd derivative of $f(x)$ on $x \in [2,3]$ with the exact result.
###Code
def f(x):
return np.log(x)
def df(f,x):
'''Compute the forward, centered and 2nd derivative of f = ln(x)'''
Δx = x[1]-x[0]
dff = (f(x+Δx)-f(x))/Δx
dcf = (f(x+Δx)-f(x-Δx))/(2*Δx)
d2f = (f(x+Δx)+f(x-Δx)-2*f(x))/(Δx**2)
return dff,dcf,d2f
N = 10
x = np.linspace(2,3,N)
dff,dcf,d2f = df(f,x)
fig, axes = plt.subplots(1,2,sharex=True, sharey=False, squeeze=True, figsize=(9,4))
fig.subplots_adjust(wspace=0.5)
axes[0].plot(x,1/x, lw=1.5, label=r'$1/x$')
axes[0].plot(x,dff,'s', mfc='None', ms=5, label='Forward Deriv.')
axes[0].plot(x,dcf,'o', mfc='None', ms=5, label='Centered Deriv.')
axes[0].set_ylabel("f'(x)")
axes[0].set_xlabel('x')
axes[0].legend(handlelength=1)
axes[0].set_xlim(2,3)
axes[1].plot(x,-1/(x*x), lw=1.5, label=r'$-1/x^2$')
axes[1].plot(x,d2f,'o', mfc='None', ms=5, label='2nd Centered Deriv.')
axes[1].set_xlabel('x')
axes[1].set_ylabel("f''(x)")
axes[1].legend(loc='lower right', handlelength=1)
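###Output
_____no_output_____
###Markdown
A minimal sketch (our addition, not from the lecture) of what a Jacobi relaxation update for Laplace's equation could look like on a 2D grid: each interior point is repeatedly replaced by the average of its four neighbors while the boundary values stay fixed. The grid size and boundary potential below are only illustrative.
###Code
# illustrative grid: top edge held at V = 1, all other edges at V = 0
V = np.zeros((50, 50))
V[0, :] = 1.0
for sweep in range(5000):
    # average of the four nearest neighbors for every interior point
    V_new = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:])
    change = np.max(np.abs(V_new - V[1:-1, 1:-1]))
    V[1:-1, 1:-1] = V_new
    if change < 1e-4:
        break
print("Stopped after", sweep + 1, "sweeps; max change =", change)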
###Output
_____no_output_____ |
notebooks/community/sdk/sdk_automl_image_object_detection_batch.ipynb | ###Markdown
Vertex SDK: AutoML training image object detection model for batch prediction Run in Colab View on GitHub Open in Vertex AI Workbench OverviewThis tutorial demonstrates how to use the Vertex SDK to create image object detection models and do batch prediction using a Google Cloud [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users) model. DatasetThe dataset used for this tutorial is the Salads category of the [OpenImages dataset](https://www.tensorflow.org/datasets/catalog/open_images_v4) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. ObjectiveIn this tutorial, you create an AutoML image object detection model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environmentIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.Otherwise, make sure your environment meets this notebook's requirements. You need the following:- The Cloud Storage SDK- Git- Python 3- virtualenv- Jupyter notebook running in a virtual environment with Python 3The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).2. [Install Python 3](https://cloud.google.com/python/setupinstalling_python).3. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.6. Open this notebook in the Jupyter Notebook Dashboard. InstallationInstall the latest version of Vertex SDK for Python.
###Code
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
if os.environ["IS_TESTING"]:
! pip3 install --upgrade tensorflow $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtimeThis tutorial does not require a GPU runtime. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Initialize Vertex SDK for PythonInitialize the Vertex SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML image object detection model. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv"
###Output
_____no_output_____
###Markdown
Quick peek at your dataThis tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
###Code
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
###Output
_____no_output_____
###Markdown
Create the DatasetNext, create the `Dataset` resource using the `create` method for the `ImageDataset` class, which takes the following parameters:- `display_name`: The human readable name for the `Dataset` resource.- `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource.- `import_schema_uri`: The data labeling schema for the data items.This operation may take several minutes.
###Code
dataset = aip.ImageDataset.create(
display_name="Salads" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,
)
print(dataset.resource_name)
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLImageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `prediction_type`: The type task to train the model for. - `classification`: An image classification model. - `object_detection`: An image object detection model.- `multi_label`: If a classification task, whether single (`False`) or multi-labeled (`True`).- `model_type`: The type of model for deployment. - `CLOUD`: Deployment on Google Cloud - `CLOUD_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on Google Cloud. - `CLOUD_LOW_LATENCY_`: Optimized for latency over accuracy for deployment on Google Cloud. - `MOBILE_TF_VERSATILE_1`: Deployment on an edge device. - `MOBILE_TF_HIGH_ACCURACY_1`:Optimized for accuracy over latency for deployment on an edge device. - `MOBILE_TF_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on an edge device.- `base_model`: (optional) Transfer learning from existing `Model` resource -- supported for image classification only.The instantiated object is the DAG (directed acyclic graph) for the training job.
###Code
dag = aip.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `budget_milli_node_hours`: (optional) Maximum training time specified in units of milli node hours (1000 = one node hour).- `disable_early_stopping`: If `True`, the entire training budget is used; if `False`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method when completed returns the `Model` resource.The execution of the training pipeline will take up to 60 minutes.
###Code
model = dag.run(
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False,
)
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=salads_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Send a batch prediction requestSend a batch prediction request to your trained model. Get test item(s)Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
###Code
test_items = !gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(",")
cols_2 = str(test_items[1]).split(",")
if len(cols_1) == 11:
test_item_1 = str(cols_1[1])
test_label_1 = str(cols_1[2])
test_item_2 = str(cols_2[1])
test_label_2 = str(cols_2[2])
else:
test_item_1 = str(cols_1[0])
test_label_1 = str(cols_1[1])
test_item_2 = str(cols_2[0])
test_label_2 = str(cols_2[1])
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, copy the test items over to your Cloud Storage bucket.
###Code
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `content`: The Cloud Storage path to the image.- `mime_type`: The content type. In our example, it is a `jpeg` file.For example: {'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}
###Code
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
###Output
_____no_output_____
###Markdown
Make the batch prediction requestNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:- `job_display_name`: The human readable name for the batch prediction job.- `gcs_source`: A list of one or more batch request input files.- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.- `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
###Code
batch_predict_job = model.batch_predict(
job_display_name="salads_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
###Output
_____no_output_____
###Markdown
Wait for completion of batch prediction jobNext, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
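For reference, a minimal sketch of that blocking variant (the same call as above, just with `sync=True`, so no separate `wait()` is needed):

```python
# Blocking sketch: with sync=True the call returns only once the batch job has finished.
batch_predict_job = model.batch_predict(
    job_display_name="salads_" + TIMESTAMP,
    gcs_source=gcs_input_uri,
    gcs_destination_prefix=BUCKET_NAME,
    sync=True,
)
```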
###Code
batch_predict_job.wait()
###Output
_____no_output_____
###Markdown
Get the predictionsNext, get the results from the completed batch prediction job.The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:- `content`: The prediction request.- `prediction`: The prediction response. - `ids`: The internal assigned unique identifiers for each prediction request. - `displayNames`: The class names for each class label. - `bboxes`: The bounding box of each detected object.
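As an illustration, once a result line has been parsed with `json.loads` (as in the cell below), the fields described above could be unpacked roughly as follows. This is only a sketch -- the key names follow the description above, and the exact layout should be confirmed against your own output files:

```python
# Sketch: unpack one parsed prediction line (assumes the keys described above).
prediction = line["prediction"]
for name, bbox in zip(prediction["displayNames"], prediction["bboxes"]):
    # bbox holds the normalized coordinates of one detected object's bounding box
    print(name, bbox)
```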
###Code
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Vertex SDK: AutoML training image object detection model for batch prediction Run in Colab View on GitHub Open in Google Cloud Notebooks OverviewThis tutorial demonstrates how to use the Vertex SDK to create image object detection models and do batch prediction using a Google Cloud [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users) model. DatasetThe dataset used for this tutorial is the Salads category of the [OpenImages dataset](https://www.tensorflow.org/datasets/catalog/open_images_v4) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. ObjectiveIn this tutorial, you create an AutoML image object detection model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environmentIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.Otherwise, make sure your environment meets this notebook's requirements. You need the following:- The Cloud Storage SDK- Git- Python 3- virtualenv- Jupyter notebook running in a virtual environment with Python 3The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).2. [Install Python 3](https://cloud.google.com/python/setupinstalling_python).3. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.6. Open this notebook in the Jupyter Notebook Dashboard. InstallationInstall the latest version of Vertex SDK for Python.
###Code
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtimeThis tutorial does not require a GPU runtime. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)4. If you are running this notebook locally, you will need to install the [Cloud SDK]((https://cloud.google.com/sdk)).5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Initialize Vertex SDK for PythonInitialize the Vertex SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML image object detection model. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv"
###Output
_____no_output_____
###Markdown
Quick peek at your dataThis tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
###Code
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
###Output
_____no_output_____
###Markdown
Create the DatasetNext, create the `Dataset` resource using the `create` method for the `ImageDataset` class, which takes the following parameters:- `display_name`: The human readable name for the `Dataset` resource.- `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource.- `import_schema_uri`: The data labeling schema for the data items.This operation may take several minutes.
###Code
dataset = aip.ImageDataset.create(
display_name="Salads" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,
)
print(dataset.resource_name)
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLImageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `prediction_type`: The type of task to train the model for. - `classification`: An image classification model. - `object_detection`: An image object detection model.- `multi_label`: If a classification task, whether single (`False`) or multi-labeled (`True`).- `model_type`: The type of model for deployment. - `CLOUD`: Deployment on Google Cloud - `CLOUD_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on Google Cloud. - `CLOUD_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on Google Cloud. - `MOBILE_TF_VERSATILE_1`: Deployment on an edge device. - `MOBILE_TF_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on an edge device. - `MOBILE_TF_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on an edge device.- `base_model`: (optional) Transfer learning from an existing `Model` resource -- supported for image classification only. The instantiated object is the DAG (directed acyclic graph) for the training job.
###Code
dag = aip.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `budget_milli_node_hours`: (optional) Maximum training time specified in milli node-hours (1000 = one hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements. When completed, the `run` method returns the `Model` resource. The execution of the training pipeline will take up to 60 minutes.
###Code
model = dag.run(
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False,
)
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=salads_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Send a batch prediction requestSend a batch prediction to your deployed model. Get test item(s)Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
###Code
test_items = !gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(",")
cols_2 = str(test_items[1]).split(",")
if len(cols_1) == 11:
test_item_1 = str(cols_1[1])
test_label_1 = str(cols_1[2])
test_item_2 = str(cols_2[1])
test_label_2 = str(cols_2[2])
else:
test_item_1 = str(cols_1[0])
test_label_1 = str(cols_1[1])
test_item_2 = str(cols_2[0])
test_label_2 = str(cols_2[1])
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, copy the test items over to your Cloud Storage bucket.
###Code
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `content`: The Cloud Storage path to the image.- `mime_type`: The content type. In our example, it is a `jpeg` file.For example: {'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}
###Code
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
###Output
_____no_output_____
###Markdown
Make the batch prediction requestNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:- `job_display_name`: The human readable name for the batch prediction job.- `gcs_source`: A list of one or more batch request input files.- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.- `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
###Code
batch_predict_job = model.batch_predict(
job_display_name="salads_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
###Output
_____no_output_____
###Markdown
Wait for completion of batch prediction jobNext, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
###Code
batch_predict_job.wait()
###Output
_____no_output_____
###Markdown
Get the predictionsNext, get the results from the completed batch prediction job.The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:- `content`: The prediction request.- `prediction`: The prediction response. - `ids`: The internal assigned unique identifiers for each prediction request. - `displayNames`: The class names for each class label. - `bboxes`: The bounding box of each detected object.
###Code
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Vertex SDK: AutoML training image object detection model for batch prediction Run in Colab View on GitHub OverviewThis tutorial demonstrates how to use the Vertex SDK to create image object detection models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users). DatasetThe dataset used for this tutorial is the Salads category of the [OpenImages dataset](https://www.tensorflow.org/datasets/catalog/open_images_v4) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. ObjectiveIn this tutorial, you create an AutoML image object detection model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. InstallationInstall the latest version of Vertex SDK.
###Code
import sys
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = '--user'
else:
USER_FLAG = ''
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex SDK and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/ai-platform-unified/docs/general/locations)
###Code
REGION = 'us-central1' #@param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the names of the resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Initialize Vertex SDKInitialize the Vertex SDK for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML image object detection model. Create a Dataset ResourceFirst, you create an image Dataset resource for the Salads dataset. Data preparationThe Vertex `Dataset` resource for images has some requirements for your data:- Images must be stored in a Cloud Storage bucket.- Each image file must be in an image format (PNG, JPEG, BMP, ...).- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.- The index file must be either CSV or JSONL. CSVFor image object detection, the CSV index file has the requirements:- No heading.- First column is the Cloud Storage path to the image.- Second column is the label.- Third/Fourth columns are the upper left corner of bounding box. Coordinates are normalized, between 0 and 1.- Fifth/Sixth/Seventh columns are not used and should be 0.- Eighth/Ninth columns are the lower right corner of the bounding box. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = 'gs://cloud-samples-data/vision/salads.csv'
###Output
_____no_output_____
###Markdown
Quick peek at your dataYou will use a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
###Code
if 'IMPORT_FILES' in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
###Output
_____no_output_____
###Markdown
Create the DatasetNext, create the `Dataset` resource using the `create()` method for the `ImageDataset` class, which takes the following parameters:- `display_name`: The human readable name for the `Dataset` resource.- `gcs_source`: A list of one or more dataset index file to import the data items into the `Dataset` resource.- `import_schema_uri`: The data labeling schema for the data items.This operation may take several minutes.
###Code
dataset = aip.ImageDataset.create(
display_name="Salads" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,
)
print(dataset.resource_name)
###Output
_____no_output_____
###Markdown
Train the modelNow train an AutoML image object detection model using your Vertex `Dataset` resource. To train the model, do the following steps:1. Create a Vertex training pipeline for the `Dataset` resource.2. Execute the pipeline to start the training. Create and run training pipelineTo train an AutoML image object detection model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLImageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `prediction_type`: The type of task to train the model for. - `classification`: An image classification model. - `object_detection`: An image object detection model.- `multi_label`: If a classification task, whether single (`False`) or multi-labeled (`True`).- `model_type`: The type of model for deployment. - `CLOUD`: Deployment on Google Cloud - `CLOUD_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on Google Cloud. - `CLOUD_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on Google Cloud. - `MOBILE_TF_VERSATILE_1`: Deployment on an edge device. - `MOBILE_TF_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on an edge device. - `MOBILE_TF_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on an edge device.- `base_model`: (optional) Transfer learning from an existing `Model` resource -- supported for image classification only. The instantiated object is the DAG for the training job.
###Code
dag = aip.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
model_type="CLOUD",
base_model=None,
)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run()`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `validation_fraction_split`: The percentage of the dataset to use for validation.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `budget_milli_node_hours`: (optional) Maximum training time specified in milli node-hours (1000 = one hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements. When completed, the `run` method returns the `Model` resource. The execution of the training pipeline will take up to 20 minutes.
###Code
model = dag.run(
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False
)
###Output
_____no_output_____
###Markdown
Model deployment for batch predictionNow use the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for online prediction. For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource (a minimal sketch of this path is shown below for contrast). For batch prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will unprovision the resources for the batch prediction request. Make a batch prediction requestNow do a batch prediction to your deployed model. Get test item(s)Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
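For contrast with the batch flow used in this tutorial, the sketch below shows what the online-prediction path outlined above might look like. It is not executed here; the `deploy()` call uses default resources, the image path is a placeholder, and the instance payload (a base64-encoded image under a `content` key) is an assumption that should be checked against the model's prediction schema.

```python
# Sketch only -- online-prediction path (not run in this tutorial).
import base64

import tensorflow as tf

endpoint = model.deploy()  # create an Endpoint resource and deploy the Model to it

# Read one image from Cloud Storage and base64-encode it (placeholder path).
with tf.io.gfile.GFile("gs://[your-bucket]/some_image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# Assumed instance schema for an AutoML image model: {"content": <base64 string>}.
response = endpoint.predict(instances=[{"content": encoded}])
print(response.predictions)
```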
###Code
test_items = !gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(',')
cols_2 = str(test_items[1]).split(',')
if len(cols_1) == 11:
test_item_1 = str(cols_1[1])
test_label_1 = str(cols_1[2])
test_item_2 = str(cols_2[1])
test_label_2 = str(cols_2[2])
else:
test_item_1 = str(cols_1[0])
test_label_1 = str(cols_1[1])
test_item_2 = str(cols_2[0])
test_label_2 = str(cols_2[1])
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
###Code
file_1 = test_item_1.split('/')[-1]
file_2 = test_item_2.split('/')[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `content`: The Cloud Storage path to the image.- `mime_type`: The content type. In our example, it is a `jpeg` file. For example: {'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}
###Code
import tensorflow as tf
import json
gcs_input_uri = BUCKET_NAME + '/test.jsonl'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + '\n')
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + '\n')
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
###Output
_____no_output_____
###Markdown
Make the batch prediction requestNow that your `Model` resource is trained, you can make a batch prediction by invoking the `batch_predict()` method, with the following parameters:- `job_display_name`: The human readable name for the batch prediction job.- `gcs_source`: A list of one or more batch request input files.- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.- `sync`: If set to `True`, the call will block while waiting for the asynchronous batch job to complete.
###Code
batch_predict_job = model.batch_predict(
job_display_name="$(DATASET_ALIAS)_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False
)
print(batch_predict_job)
###Output
_____no_output_____
###Markdown
Wait for completion of batch prediction jobNext, wait for the batch job to complete.
###Code
batch_predict_job.wait()
###Output
_____no_output_____
###Markdown
Get the predictionsNext, get the results from the completed batch prediction job.The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method `iter_outputs()` to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:- `content`: The prediction request.- `prediction`: The prediction response. - `ids`: The internal assigned unique identifiers for each prediction request. - `displayNames`: The class names for each class label. - `confidences`: The predicted confidence of each object, between 0 and 1, per class label. - `bboxes`: The bounding box for each object
###Code
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex dataset object
try:
if delete_dataset and 'dataset' in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if delete_model and 'model' in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
    if delete_endpoint and 'endpoint' in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
    if delete_batchjob and 'batch_predict_job' in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____ |
notebooks/covid19_growth.ipynb | ###Markdown
COVID-19 Growth Analysis> Visualizations of the growth of COVID-19.- comments: true- author: Thomas Wiecki- categories: [growth]- image: images/covid-growth.png- permalink: /growth-analysis/
###Code
#hide
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
import seaborn as sns
import requests
import io
sns.set_context('talk')
plt.style.use('seaborn-whitegrid')
#hide
def load_timeseries(name,
base_url='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series'):
# Thanks to kasparthommen for the suggestion to directly download
url = f'{base_url}/time_series_19-covid-{name}.csv'
csv = requests.get(url).text
df = pd.read_csv(io.StringIO(csv),
index_col=['Country/Region', 'Province/State', 'Lat', 'Long'])
df['type'] = name.lower()
df.columns.name = 'date'
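    # Reshape from wide (one column per date) to long format: one row per country/state/type/date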
df = (df.set_index('type', append=True)
.reset_index(['Lat', 'Long'], drop=True)
.stack()
.reset_index()
.set_index('date')
)
df.index = pd.to_datetime(df.index)
df.columns = ['country', 'state', 'type', 'cases']
# Move HK to country level
df.loc[df.state =='Hong Kong', 'country'] = 'Hong Kong'
df.loc[df.state =='Hong Kong', 'state'] = np.nan
# Aggregate large countries split by states
df = pd.concat([df,
(df.loc[~df.state.isna()]
.groupby(['country', 'date', 'type'])
.sum()
.rename(index=lambda x: x+' (total)', level=0)
.reset_index(level=['country', 'type']))
])
return df
df_confirmed = load_timeseries('Confirmed')
# Estimated critical cases
p_crit = .05
df_confirmed = df_confirmed.assign(cases_crit=df_confirmed.cases*p_crit)
# Compute days relative to when 100 confirmed cases was crossed
df_confirmed.loc[:, 'days_since_100'] = np.nan
for country in df_confirmed.country.unique():
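    # Day index runs ..., -2, -1 before a country crosses 100 cases and 0, 1, 2, ... afterwards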
df_confirmed.loc[(df_confirmed.country == country), 'days_since_100'] = \
np.arange(-len(df_confirmed.loc[(df_confirmed.country == country) & (df_confirmed.cases < 100)]),
len(df_confirmed.loc[(df_confirmed.country == country) & (df_confirmed.cases >= 100)]))
annotate_kwargs = dict(
s='Based on COVID Data Repository by Johns Hopkins CSSE ({})\nBy Thomas Wiecki'.format(df_confirmed.index.max().strftime('%B %d, %Y')),
xy=(0.05, 0.01), xycoords='figure fraction', fontsize=10)
#hide
# Country names seem to change quite a bit
df_confirmed.country.unique()
#hide
european_countries = ['Italy', 'Germany', 'France (total)', 'Spain', 'United Kingdom (total)']#,
#'Iran']
large_engl_countries = ['Australia (total)', 'US (total)', 'Canada (total)']
asian_countries = ['Singapore', 'Japan', 'Korea, South', 'Hong Kong']
south_american_countries = ['Argentina', 'Brazil', 'Colombia', 'Chile']
african_countries = ['Ghana','Kenya','Rwanda' ]
south_africa = ['South Africa']
country_groups = [european_countries, asian_countries, south_american_countries, south_africa]
line_styles = ['-', ':', '--', '-.', '-']
#collapse-hide
def plot_countries(df, countries, min_cases=100, ls='-', col='cases'):
for country in countries:
df_country = df.loc[(df.country == country) & (df.cases >= min_cases)]
if len(df_country) == 0:
continue
df_country.reset_index()[col].plot(label=country, ls=ls)
sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max
fig, ax = plt.subplots(figsize=(12, 8))
for countries, ls in zip(country_groups, line_styles):
plot_countries(df_confirmed, countries, ls=ls)
x = np.linspace(0, plt.xlim()[1] - 1)
ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth')
ax.set(yscale='log',
title='Exponential growth of COVID-19 across countries',
xlabel='Days from first 100 confirmed cases',
ylabel='Confirmed cases (log scale)')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, 1.0))
ax.annotate(**annotate_kwargs)
sns.despine();
#hide
# This creates a preview image for the blog post and home page
fig.savefig('../images/covid-growth.png')
#collapse-hide
fig, ax = plt.subplots(figsize=(12, 8))
for countries, ls in zip(country_groups, line_styles):
plot_countries(df_confirmed, countries, ls=ls)
x = np.linspace(0, plt.xlim()[1] - 1)
ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth')
ax.set(title='Exponential growth of COVID-19 across countries',
xlabel='Days from first 100 confirmed cases',
ylabel='Confirmed cases', ylim=(0, 30000))
ax.legend(bbox_to_anchor=(1.0, 1.0))
ax.annotate(**annotate_kwargs)
sns.despine();
#collapse-hide
smooth_days = 4
fig, ax = plt.subplots(figsize=(14, 8))
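# Daily percent change in confirmed cases per country, smoothed with a rolling mean over smooth_days days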
df_confirmed['pct_change'] = (df_confirmed
.groupby('country')
.cases
.pct_change()
.rolling(smooth_days)
.mean()
)
for countries, ls in zip(country_groups, line_styles):
(df_confirmed.set_index('country')
.loc[countries]
.loc[lambda x: x.cases > 100]
.reset_index()
.set_index('days_since_100')
.groupby('country', sort=False)['pct_change']
.plot(ls=ls)
)
ax.set(ylim=(0, 1),
xlim=(0, 20),
title='Are we seeing changes in daily growth rate?',
xlabel='Days from first 100 confirmed cases',
ylabel='Daily percent change (smoothed over {} days)'.format(smooth_days),
)
ax.axhline(.33, ls='--', color='k')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, 1.0))
sns.despine()
ax.annotate(**annotate_kwargs);
#collapse-hide
sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max
fig, ax = plt.subplots(figsize=(12, 8))
# 28000 ICU beds total, 80% occupied
icu_germany = 28000
icu_germany_free = .2
df_tmp = df_confirmed.loc[lambda x: (x.country == 'Germany') & (x.cases > 100)].cases_crit
df_tmp.plot(ax=ax)
x = np.linspace(0, 30, 30)
pd.Series(index=pd.date_range(df_tmp.index[0], periods=30),
data=100*p_crit * (1.33) ** x).plot(ax=ax,ls='--', color='k', label='33% daily growth')
ax.axhline(icu_germany, color='.3', ls='-.', label='Total ICU beds')
ax.axhline(icu_germany * icu_germany_free, color='.5', ls=':', label='Free ICU beds')
ax.set(yscale='log',
title='When will Germany run out of ICU beds?',
ylabel='Expected critical cases (assuming {:.0f}% critical)'.format(100 * p_crit),
)
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, 1.0))
sns.despine()
ax.annotate(**annotate_kwargs);
###Output
_____no_output_____
###Markdown
COVID-19 growth analysis(c) 2020, [Thomas Wiecki](https://twitter.com/twiecki)Adapted for South Africa by [Alta de Waal](https://twitter.com/AltadeWaal) This notebook gets up-to-date data from the [Coronavirus COVID-19 (2019-nCoV) Data Repository for South Africa [Hosted by DSFSI group at University of Pretoria]](https://github.com/dsfsi/covid19za) and recreates the (pay-walled) plot in the [Financial Times]( https://www.ft.com/content/a26fbf7e-48f8-11ea-aeb3-955839e06441).
###Code
%matplotlib inline
import datetime
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
import seaborn as sns
sns.set_context('talk')
plt.style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
Load data
###Code
def load_timeseries(name):
df = pd.read_csv(name)
df = (df.set_index('date'))
df.index = pd.to_datetime(df.index, dayfirst=True)
return df
df = load_timeseries('data/covid19za_timeline_confirmed.csv')
df.head()
def plot_confirmed(provinces, min_cases=100, ls='-'):
for province in provinces:
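        # Each row of the source data is one confirmed case, so counting rows per date gives daily new cases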
df1 = df.loc[(df.province == province)].groupby(['date']).agg({'country': ['count']})
df1.columns = ['new cases']
df1['cummulative'] = df1['new cases'].cumsum()
(df1.reset_index()['cummulative']
.plot(label=province, ls=ls))
print('\n' + province +":")
print(df1)
# sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max
fig, ax = plt.subplots(figsize=(12, 8))
provinces = ['GP', 'WC', 'KZN']
plot_confirmed(provinces, min_cases=1, ls='-')
x = np.linspace(0, plt.xlim()[1])
plt.plot(x,x+(1.33), ls='--', color='k', label='33% daily growth')
#plt.yscale('log');
plt.title('Data up to {}'.format(df.index.max().strftime('%B %d, %Y')))
plt.xlabel('Days from first confirmed case')
plt.ylabel('Confirmed cases')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.set_xticks(range(0,int(plt.xlim()[1])+1))
plt.legend(bbox_to_anchor=(1.0, 1.0))
sns.despine()
plt.annotate('Based on Coronavirus COVID-19 (2019-nCoV) Data Repository for South Africa [Hosted by DSFSI group at University of Pretoria]',
(0.1, 0.01), xycoords='figure fraction', fontsize=10)
###Output
GP:
new cases cummulative
date
2020-03-07 1 1
2020-03-11 4 5
2020-03-12 1 6
2020-03-13 6 12
WC:
new cases cummulative
date
2020-03-11 1 1
2020-03-13 2 3
KZN:
new cases cummulative
date
2020-03-05 1 1
2020-03-08 1 2
2020-03-11 1 3
2020-03-12 1 4
|
notebook/Unit2-1-Matplotlib.ipynb | ###Markdown
Plotting Using Matplotlib (numpy, scipy, matplotlib) Often text is the best way to communicate information, but sometimes there is a lot of truth to the Chinese proverb, **图片的意义可以表达近万字** >A picture's meaning can express ten thousand words. **Matplotlib** http://matplotlib.org/ Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits. Matplotlib Developers on GitHub: https://github.com/matplotlib User's Guide: http://matplotlib.org/users/index.html 1 Matplotlib.pyplot[Matplotlib.pyplot](https://matplotlib.org/2.0.2/api/pyplot_api.html) provides a `MATLAB`-like plotting framework. 1.1 The Simple ExampleLet's start with a simple example that uses `pyplot.plot` to produce the plot.
###Code
%%file ./code/python/plt111.py
import matplotlib.pyplot as plt
plt.figure() #create figure
plt.plot([1,2,3,4], [1,7,3,5]) #draw on figure 1 <x,y> list/array
plt.show() #show figure on screen
###Output
Overwriting ./code/python/plt111.py
###Markdown
```>python plt111.py``` 
###Code
import matplotlib.pyplot as plt
plt.figure() #create figure 1
x=[1,2,3,4]
y=[1,7,3,5]
plt.plot(x,y) # plot x and y using default line style and color
plt.show() #show figure on screen
###Output
_____no_output_____
###Markdown
1.2 The Basic Method of PyPlot* pyplot.figure()* pyplot.plot(x,y)* pyplot.show() 1.2.1 [pyplot.figure ](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.figure.html#matplotlib.pyplot.figure) Create a new figure.```pythonmatplotlib.pyplot.figure(num=None)``` **num** : integer or string, optional, default: ```None```In the example, num is not provided, so a new figure will be created. 1.2.2 [ pyplot.plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html#matplotlib.pyplot.plot)Plot (y versus x) as lines and/or markers```python matplotlib.pyplot.plot(x, y)```  1.2.3 [pyplot.show](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.show.html)**Display a figure**.
###Code
plt.figure(1) # create figure with number 1
x=[1,2,3,4]
y=[1,7,3,5]
plt.plot(x,y) # plot x and y using blue circle markers
plt.show() # show figure on screen
###Output
_____no_output_____
###Markdown
1.3 Multiple figures & write them to files 1.3.1 Multiple figures Create a new figure.```pythonmatplotlib.pyplot.figure(num=None)``` **num** : integer or string, optional, default: ```None```* If not provided, a new figure will be created, and the figure number will be incremented. The figure object holds this number in a number attribute.* If num is provided: * If a figure with this id does not exist, it is created and returned. * If a figure with this id already exists, it is made active and a reference to it is returned. * If num is a string, the window title will be set to this figure's num. It is possible to produce **multiple figures**. The next example produces two figures: **1, 2**
###Code
import matplotlib.pyplot as plt
# create figure 1
plt.figure(1)
plt.plot([1,2,3,4], [1,2,3,4]) # plot on figure 1
# create figure 2
plt.figure(2)
plt.plot([1,4,2,3], [5,6,7,8]) # plot on figure 2
# figure 1 id already exists, make figure 1 active
# and returns a reference to it
# Go back to figure 1 and plotting again
plt.figure(1)
# Plot again on figure 1
plt.plot([5,6,10,3]) # plot(y) on figure 1
plt.show()
###Output
_____no_output_____
###Markdown
1. create figure 1: ```plt.figure(1)```2. create figure 2: ```plt.figure(2)```3. Go back and plotting on figure 1 ```plt.figure(1)``````python plot(y)``` pyplot.plot(y)plot $y$ using $x$ as index array $0..N-1$,using default line style and color* `pyplot.plot([5,6,10,3]) plot again on figure 1` The corresponding $x$ values default to `range(len([5, 6, 10, 3]))`( 0 to 3 in this case plot $y$ using $x$ as index array$ 0..N-1$ **Figure 1**Two lines: ```pythonplt.plot([1,2,3,4], [1,2,3,4]) Go back and plotting on figure 1plt.plot([5,6,10,3])```**Figure 2**One line: ```pythonplt.plot([1,4,2,3], [5,6,7,8])``` 1.3.2 Write figure to files```pythonplt.savefig(figurefilename)```These files can have any name you like. They will all have the file extension` .png` in the **default**.* `.png` indicates that the file is in the `Portable Networks Graphics` format. This is a public domain standard for representing imagesYou can set the figure file format,for example,**To save the plot as an SVG**[Scalable Vector Graphics (SVG)](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics) is an XML-based vector image format for two-dimensional graphics with support for interactivity and animation. The SVG specification is an open standard developed by the World Wide Web Consortium (W3C) since 1999. All major modern web browsers—including Mozilla Firefox, Internet Explorer, Google Chrome, Opera, Safari, and Microsoft Edge—have SVG rendering support.
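As a small aside, `savefig` also accepts explicit `format` and `dpi` keyword arguments, so the output type does not have to be inferred from the file extension. A minimal sketch (the `Figure_demo` file names are just placeholders):
```python
import matplotlib.pyplot as plt

plt.figure()
plt.plot([1, 2, 3, 4], [1, 7, 3, 5])
# format= sets the file type explicitly; dpi= controls the resolution of raster output
plt.savefig('./img/Figure_demo.png', format='png', dpi=150)
plt.savefig('./img/Figure_demo.svg', format='svg')
```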
###Code
import matplotlib.pyplot as plt
plt.figure(1) #create figure 1
plt.plot([1,2,3,4], [1,2,3,4]) # plot on figure 1
plt.figure(2) #create figure 2
plt.plot([1,4,2,3], [5,6,7,8]) # plot on figure 2
#save figure 2 without extension,to the default .png
plt.savefig('./img/Figure2')
#go back to plot working on figure 1
plt.figure(1)
# plot again on figure 1
plt.plot([5,6,10,3]) # # plot y using x as index array 0..N-1,using default line style and color
# save figure 1 as an SVG
plt.savefig('./img/Figure11.svg')
!dir .\img\Figure*
###Output
驱动器 J 中的卷是 cmh
卷的序列号是 9C25-3306
J:\SEU\SEECW\SE\SEES\notebook\img 的目录
2021/03/26 10:26 11,026 Figure1.png
2021/03/26 10:26 15,622 Figure1.svg
2022/04/08 08:38 13,908 Figure11.svg
2022/04/08 08:38 11,120 Figure2.png
2021/03/26 10:26 12,119 Figure2.svg
2021/03/26 10:26 146,585 figure412.jpg
6 个文件 210,380 字节
0 个目录 92,265,152,512 可用字节
###Markdown
1.4 title, xlabel, ylabel Let's look at an example: the growth of an initial investment of $10,000 at an annual interest rate of 5%
###Code
import matplotlib.pyplot as plt
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
plt.plot(values) # plot y using x as index array 0..N-1,using default line style and color
plt.show()
###Output
_____no_output_____
###Markdown
If we **look only at the plot itself**, the fact that it shows **the growth of an initial investment of $10,000 at an annual rate of 5%** cannot be easily inferred; we would have to read the code. That's a bad thing. **All plots should have** `informative` **titles**, and all **axes** should be `labeled`. We therefore add the following lines to the end of our code: ```plt.title('5% Growth, Compounded Annually')plt.xlabel('Years of Compounding')plt.ylabel('Value of Principal ($)')```
###Code
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
plt.plot(values) # plot y using x as index array 0..N-1,using default line style and color
# add title, xlabel, ylabel
plt.title('5% Growth, Compounded Annually')
plt.xlabel('Years of Compounding')
plt.ylabel('Value of Principal ($)')
plt.show()
###Output
_____no_output_____
###Markdown
1.5 Formatting plotted curve 1.5.1 Line and marker 1.5.1.1 The color, line type and marker symbols For every plotted curve, there is an optional argument that is **a format string** indicating **the `color`, line `type` and marker `symbols` of the plot**: 1. The first character is the color: example: b = blue 2. The next characters are the line type: example: - = solid line (possible line styles: '-', '--', '-.', ':') 3. The remaining characters are the marker symbol: example: + symbol The **default format** string is 'b-', which produces a blue solid line (蓝色实线): ```pythonpyplot.plot(values)``` If you want to plot the above as a green **dashed** line with **circle** markers (绿色虚线圆点), you would replace the call with ```pythonpyplot.plot(values, 'g--o')``` 1.5.1.2 Line width To change the line width, we can use the `linewidth` or `lw` keyword argument.```pythonplt.plot(values, linewidth = 2)```
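To make the format strings concrete, here is a short sketch that draws the same data with three different format strings (the vertical offsets are only there to keep the lines from overlapping):
```python
import matplotlib.pyplot as plt

ys = [1, 4, 9, 16, 25]
plt.figure()
plt.plot(ys, 'b-')                      # blue solid line (the default 'b-')
plt.plot([y + 5 for y in ys], 'g--o')   # green dashed line with circle markers
plt.plot([y + 10 for y in ys], 'r:+')   # red dotted line with plus markers
plt.show()
```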
###Code
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
# green dashed line, circle marker,width = 2
plt.plot(values,'g--o',linewidth = 2)
#If we add to the end of our the code the lines
plt.title('5% Growth, Compounded Annually')
plt.xlabel('Years of Compounding')
plt.ylabel('Value of Principal ($)')
plt.show()
###Output
_____no_output_____
###Markdown
1.5.2 Type size It's also possible to change the type `size` used in plots. For example, set the `fontsize` keyword: ```pythonplt.xlabel('Years of Compounding', fontsize = 'x-small')```
###Code
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
#blue dashed line ,width =3
plt.plot(values,'b--', lw = 3)
# fontsize
plt.title('5% Growth, Compounded Annually', fontsize = 'x-large')
# fontsize
plt.xlabel('Years of Compounding', fontsize = 'x-small')
plt.ylabel('Value of Principal ($)')
plt.show()
###Output
_____no_output_____
###Markdown
Plotting Using Matplotlib Often text is the best way to communicate information, but sometimes there is a lot of truth to the Chinese proverb, **图片的意义可以表达近万字** > A picture's meaning can express ten thousand words. **Matplotlib** http://matplotlib.org/ Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits. Matplotlib Developers on GitHub: https://github.com/matplotlib User's Guide: http://matplotlib.org/users/index.html 1 Matplotlib.pyplot (https://matplotlib.org/2.0.2/api/pyplot_api.html) **Matplotlib.pyplot** provides a `MATLAB`-like plotting framework. 1.1 The Simple Example Let's start with a simple example that uses `pyplot.plot` to produce the plot.
###Code
%%file ./code/python/plt111.py
import matplotlib.pyplot as plt
plt.figure() #create figure
plt.plot([1,2,3,4], [1,7,3,5]) #draw on figure 1 <x,y> list/array
plt.show() #show figure on screen
###Output
_____no_output_____
###Markdown
```>python plt111.py``` 
###Code
import matplotlib.pyplot as plt
plt.figure() #create figure 1
x=[1,2,3,4]
y=[1,7,3,5]
plt.plot(x,y) # plot x and y using default line style and color
plt.show() #show figure on screen
###Output
_____no_output_____
###Markdown
1.2 The Basic Method of PyPlot* pyplot.figure()* pyplot.plot(x,y)* pyplot.show() 1.2.1 [pyplot.figure ](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.figure.html#matplotlib.pyplot.figure) Create a new figure.```pythonmatplotlib.pyplot.figure(num=None)``` **num** : integer or string, optional, default: ```None```In the example, num is not provided, so a new figure will be created. 1.2.2 [ pyplot.plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html#matplotlib.pyplot.plot)Plot (y versus x) as lines and/or markers```python matplotlib.pyplot.plot(x, y)```  1.2.3 [pyplot.show](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.show.html)**Display a figure**.
###Code
plt.figure(1) # create figure with number 1
x=[1,2,3,4]
y=[1,7,3,5]
plt.plot(x,y) # plot x and y using blue circle markers
plt.show() # show figure on screen
###Output
_____no_output_____
###Markdown
1.3 Multiple figures & write them to files 1.3.1 Multiple figures Create a new figure.```pythonmatplotlib.pyplot.figure(num=None)``` **num** : integer or string, optional, default: ```None```* If not provided, a new figure will be created, and the figure number will be incremented. The figure object holds this number in a number attribute.* If num is provided: * If a figure with this id does not exist, it is created and returned. * If a figure with this id already exists, it is made active and a reference to it is returned. * If num is a string, the window title will be set to this figure's num. It is possible to produce **multiple figures**. The next example produces two figures: **1, 2**
###Code
import matplotlib.pyplot as plt
# create figure 1
plt.figure(1)
plt.plot([1,2,3,4], [1,2,3,4]) # plot on figure 1
# create figure 2
plt.figure(2)
plt.plot([1,4,2,3], [5,6,7,8]) # plot on figure 2
# figure 1 id already exists, make figure 1 active
# and returns a reference to it
# Go back to figure 1 and plotting again
plt.figure(1)
# Plot again on figure 1
plt.plot([5,6,10,3]) # plot(y) on figure 1
plt.show()
###Output
_____no_output_____
###Markdown
1. create figure 1: ```plt.figure(1)```2. create figure 2: ```plt.figure(2)```3. Go back and plotting on figure 1 ```plt.figure(1)``````python plot(y)``` pyplot.plot(y)plot $y$ using $x$ as index array $0..N-1$,using default line style and color* `pyplot.plot([5,6,10,3]) plot again on figure 1` The corresponding $x$ values default to `range(len([5, 6, 10, 3]))`( 0 to 3 in this case plot $y$ using $x$ as index array$ 0..N-1$ **Figure 1**Two lines: ```pythonplt.plot([1,2,3,4], [1,2,3,4]) Go back and plotting on figure 1plt.plot([5,6,10,3])```**Figure 2**One line: ```pythonplt.plot([1,4,2,3], [5,6,7,8])``` 1.3.2 Write figure to files```pythonplt.savefig(figurefilename)```These files can have any name you like. They will all have the file extension` .png` in the default.* `.png` indicates that the file is in the `Portable Networks Graphics` format. This is a public domain standard for representing imagesYou can set the figure file format,for example,**To save the plot as an SVG**[Scalable Vector Graphics (SVG)](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics) is an XML-based vector image format for two-dimensional graphics with support for interactivity and animation. The SVG specification is an open standard developed by the World Wide Web Consortium (W3C) since 1999. All major modern web browsers—including Mozilla Firefox, Internet Explorer, Google Chrome, Opera, Safari, and Microsoft Edge—have SVG rendering support.
###Code
import matplotlib.pyplot as plt
plt.figure(1) #create figure 1
plt.plot([1,2,3,4], [1,2,3,4]) # plot on figure 1
plt.figure(2) #create figure 2
plt.plot([1,4,2,3], [5,6,7,8]) # plot on figure 2
#save figure 2 without extension,to the default .png
plt.savefig('./img/Figure2')
#go back to plot working on figure 1
plt.figure(1)
# plot again on figure 1
plt.plot([5,6,10,3]) # # plot y using x as index array 0..N-1,using default line style and color
# save figure 1 as an SVG
plt.savefig('./img/Figure11.svg')
!dir .\img\Figure*
###Output
_____no_output_____
###Markdown
1.4 Title, xlabel, ylabel Let's look at another example: `the growth of an initial investment of $10,000 at an annual interest rate of 5%`
###Code
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
plt.plot(values) # plot y using x as index array 0..N-1,using default line style and color
plt.show()
###Output
_____no_output_____
###Markdown
Unless we **look at the code**, what the figure shows cannot be easily inferred by looking **only at the plot itself**. That's a bad thing. **All plots should have** `informative` **titles**, and all **axes** should be `labeled`. We therefore add the following lines to the end of our code: ```plt.title('5% Growth, Compounded Annually')plt.xlabel('Years of Compounding')plt.ylabel('Value of Principal ($)')```
###Code
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
plt.plot(values) # plot y using x as index array 0..N-1,using default line style and color
# add title, xlabel, ylabel
plt.title('5% Growth, Compounded Annually')
plt.xlabel('Years of Compounding')
plt.ylabel('Value of Principal ($)')
plt.show()
###Output
_____no_output_____
###Markdown
1.5 Formatting plotted curve 1.5.1 Line and marker 1.5.1.1 The color, line type and marker symbols For every plotted curve, there is an optional argument that is **a format string** indicating **the `color`, line `type` and marker `symbols` of the plot**: 1. The first character is the color: b = blue 2. The following characters are the line type: - = solid line * possible line styles: '-', '--', '-.', ':' 3. The following characters are the marker symbol: + symbol * possible marker symbols: '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ... The **default format** string is 'b-', which produces a solid blue line. If you want to plot the above as a green `dashed` line with `circle` markers, you would replace the call ```pythonpyplot.plot(values)``` by ```pythonpyplot.plot(values, 'g--o')``` 1.5.1.2 Line width To change the line width, we can use the `linewidth` or `lw` keyword argument.```pythonplt.plot(values, linewidth = 3)```
###Code
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
# green dashed line, circle markers, linewidth = 3
plt.plot(values,'g--o',linewidth = 3)
#If we add to the end of our the code the lines
plt.title('5% Growth, Compounded Annually')
plt.xlabel('Years of Compounding')
plt.ylabel('Value of Principal ($)')
plt.show()
###Output
_____no_output_____
###Markdown
1.5.2 Type size It's also possible to change the type `size` used in plots. For example, set the `fontsize` keyword: ```pythonplt.xlabel('Years of Compounding', fontsize = 'x-small')```
###Code
principal = 10000 #initial investment
interestRate = 0.05
years = 20
values = []
for i in range(years + 1):
values.append(principal)
principal += principal*interestRate
#line width
plt.plot(values,'b--', linewidth = 2)
# fontsize
plt.title('5% Growth, Compounded Annually', fontsize = 'xx-large')
# fontsize
plt.xlabel('Years of Compounding', fontsize = 'x-small')
plt.ylabel('Value of Principal ($)')
plt.show()
###Output
_____no_output_____ |
PyCitySchools_starter-Copy1.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# define file path
schools_file = "./Resources/schools_complete.csv"
students_file = "./Resources/students_complete.csv"
# read schools file
schools_data = pd.read_csv(schools_file)
#read student file
students_data = pd.read_csv(students_file)
school_data_complete = pd.merge(students_data,schools_data,how="left", on=["school_name","school_name"])
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
schools_count = len(school_data_complete["school_name"].unique())
students_count = school_data_complete["Student ID"].count()
total_budget = schools_data["budget"].sum()
average_math_score = school_data_complete["math_score"].mean()
average_reading_score = school_data_complete["reading_score"].mean()
overall_passing_rate = (average_math_score + average_reading_score) / 2
passing_math_count = school_data_complete[(school_data_complete["math_score"] >= 70)].count()["student_name"]
passing_math_percentage = passing_math_count / float(students_count) * 100
#------------------
passing_reading_count = school_data_complete[(school_data_complete["reading_score"] >= 70)].count()["student_name"]
passing_reading_percentage = passing_reading_count / float(students_count)* 100
district_summary = pd.DataFrame({"Total Schools" : [schools_count],
"Total Students" : [students_count],
"Total Budget" :[total_budget],
"Average Math School" : [average_math_score],
"Average reading score" : [average_reading_score],
"%Passing Math" : [passing_math_percentage],
"%Passing Reading" : [passing_reading_percentage],
"%Overal Passing Rate" : [overall_passing_rate]
})
district_summary["Total Students"] = district_summary["Total Students"].map("{:,}".format)
district_summary["Total Budget"] = district_summary["Total Budget"].map("{:,.2f}".format)
district_summary
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) * Create a dataframe to hold the above results
###Code
school_type = schools_data.set_index(["school_name"])["type"]
per_school_counts = school_data_complete["school_name"].value_counts()
per_school_budget = school_data_complete.groupby(["school_name"]).mean()["budget"]
per_school_capita = per_school_budget / per_school_counts
per_school_math = school_data_complete.groupby(["school_name"]).mean()["math_score"]
per_school_reading = school_data_complete.groupby(["school_name"]).mean()["reading_score"]
school_passing_math = school_data_complete[(school_data_complete["math_score"] >= 70)]
school_passing_reading = school_data_complete[(school_data_complete["reading_score"] >= 70)]
per_school_passing_math = school_passing_math.groupby(["school_name"]).count()["student_name"] / per_school_counts * 100
per_school_passing_reading = school_passing_reading.groupby(["school_name"]).count()["student_name"] / per_school_counts * 100
per_school_summary = pd.DataFrame({"School Type" : school_type,
"Total Student" : per_school_counts,
"total School Budget" : per_school_budget,
"Per Student Budget" : per_school_capita,
"Average Math Score" : per_school_math,
"Average Reading Score" : per_school_reading,
"%Passing Reading" : per_school_passing_reading,
"%Overall Passing Rate" : overall_passing_rate,
"%Passing Math": per_school_passing_math
})
per_school_summary
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By Passing Rate) * Sort and display the five worst-performing schools
###Code
bottom_schools = per_school_summary.sort_values(["%Overall Passing Rate"], ascending=True)
bottom_schools.head(5)
###Output
_____no_output_____
###Markdown
Math Scores by Grade * Create a table that lists the average Math Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
ninth_grader = school_data_complete[(school_data_complete["grade"] == "9th")]
tenth_grader = school_data_complete[(school_data_complete["grade"] == "10th")]
eleventh_grader = school_data_complete[(school_data_complete["grade"] == "11th")]
twelfth_grader = school_data_complete[(school_data_complete["grade"] == "12th")]
ninth_grader_scores = ninth_grader.groupby(["school_name"]).mean()["math_score"]
tenth_grader_scores = tenth_grader.groupby(["school_name"]).mean()["math_score"]
elevent_grader_scores = eleventh_grader.groupby(["school_name"]).mean()["math_score"]
twelfth_grader_scores = twelfth_grader.groupby(["school_name"]).mean()["math_score"]
scores_by_grade = pd.DataFrame({"9th" : ninth_grader_scores, "10th" : tenth_grader_scores, "11th" : elevent_grader_scores, "12th" : twelfth_grader_scores })
scores_by_grade.index.name = None
scores_by_grade
###Output
_____no_output_____
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
ninth_grader = school_data_complete[(school_data_complete["grade"] == "9th")]
tenth_grader = school_data_complete[(school_data_complete["grade"] == "10th")]
eleventh_grader = school_data_complete[(school_data_complete["grade"] == "11th")]
twelfth_grader = school_data_complete[(school_data_complete["grade"] == "12th")]
ninth_grader_scores = ninth_grader.groupby(["school_name"]).mean()["reading_score"]
tenth_grader_scores = tenth_grader.groupby(["school_name"]).mean()["reading_score"]
elevent_grader_scores = eleventh_grader.groupby(["school_name"]).mean()["reading_score"]
twelfth_grader_scores = twelfth_grader.groupby(["school_name"]).mean()["reading_score"]
scores_by_grade = pd.DataFrame({"9th" : ninth_grader_scores, "10th" : tenth_grader_scores, "11th" : elevent_grader_scores, "12th" : twelfth_grader_scores })
scores_by_grade.index.name = None
scores_by_grade
###Output
_____no_output_____
###Markdown
Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
# Sample bins. Feel free to create your own bins.
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
per_school_summary["Spending Ranges(Per Student)"] = pd.cut(per_school_capita, spending_bins,labels=group_names)
spending_math_scores = per_school_summary.groupby (["Spending Ranges(Per Student)"]).mean()["Average Math Score"]
spending_reading_scores = per_school_summary.groupby (["Spending Ranges(Per Student)"]).mean()["Average Reading Score"]
spending_passing_math = per_school_summary.groupby (["Spending Ranges(Per Student)"]).mean()["%Passing Math"]
spending_passing_reading = per_school_summary.groupby (["Spending Ranges(Per Student)"]).mean()["%Passing Reading"]
overal_passing_rate = (spending_passing_math + spending_passing_reading) / 2
spending_summary = pd.DataFrame({ "Average Math Score" : spending_math_scores,
"Average Reading Score" : spending_reading_scores,
"%Passing Math" : spending_passing_math,
"%Passing Reading" : spending_passing_reading,
"%Overal Passing Rate" : overal_passing_rate
})
spending_summary
###Output
_____no_output_____
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
# Sample bins. Feel free to create your own bins.
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
per_school_summary["School Size"] = pd.cut(per_school_summary["Total Student"], size_bins, labels=group_names)
size_math_scores = per_school_summary.groupby(["School Size"]).mean()["Average Math Score"]
size_reading_scores = per_school_summary.groupby (["School Size"]).mean()["Average Reading Score"]
size_passing_math = per_school_summary.groupby (["School Size"]).mean()[ "%Passing Math"]
size_passing_reading = per_school_summary.groupby (["School Size"]).mean()["%Passing Reading"]
overal_passing_rate = (size_passing_math + size_passing_reading) / 2
size_summary = pd.DataFrame({ "Average Math Score" : size_math_scores,
"Average Reading Score" : size_reading_scores,
"%Passing Math" : size_passing_math,
"%Passing Reading" : size_passing_reading,
"%Overal Passing Rate" : overal_passing_rate
})
size_summary
###Output
_____no_output_____
###Markdown
Scores by School Type * Perform the same operations as above, based on school type.
###Code
per_school_summary.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Index: 15 entries, Bailey High School to Wright High School
Data columns (total 11 columns):
School Type 15 non-null object
Total Student 15 non-null int64
total School Budget 15 non-null float64
Per Student Budget 15 non-null float64
Average Math Score 15 non-null float64
Average Reading Score 15 non-null float64
%Passing Reading 15 non-null float64
%Overall Passing Rate 15 non-null float64
%Passing Math 15 non-null float64
Spending Ranges(Per Student) 15 non-null category
School Size 15 non-null category
dtypes: category(2), float64(7), int64(1), object(1)
memory usage: 1.5+ KB
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import numpy as np
import os
# File to Load
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas Data Frames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine data into a single data frame set
school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
school_data_complete.head()
###Output
_____no_output_____
###Markdown
District Summary
###Code
#Calculate the total number of schools
school_data_to_load = pd.read_csv(school_data_to_load)
total_number_of_schools = len(school_data_to_load["school_name"].unique())
total_number_of_schools
#Calculate the total number of students
student_data_to_load = pd.read_csv(student_data_to_load)
total_number_of_students = student_data_to_load["student_name"].count()
total_number_of_students
# Calculate the total budget
total_budget = school_data["budget"].sum()
total_budget
total_budget_per_student = (total_budget/total_number_of_students).sum()
total_budget_per_student = "{:.2f}".format(total_budget_per_student)
total_budget_per_student
# Calculate the average math score
average_math_score = student_data_to_load["math_score"].mean()
average_math_score = "{:.2f}".format(average_math_score)
average_math_score
# Calculate the average reading score
average_reading_score = student_data_to_load["reading_score"].mean()
average_reading_score = "{:.2f}".format(average_reading_score)
average_reading_score
# Calculate the percentage of students with a passing math score (70 or greater)
# 1 - Calculate the total # of students with a passing math score (70 or greater)
pass_math = student_data.loc[student_data["math_score"]>=70]
pass_math
# Divide pass_math by total students to determine Percentage
percent_pass_math = (pass_math["student_name"].count()/total_number_of_students)*100
percent_pass_math
# Calculate the percentage of students with a passing reading score (70 or greater)
pass_reading = student_data.loc[student_data["reading_score"]>=70]
pass_reading
# Calculate the percentage of students with a passing reading score (70 or greater)
percent_pass_reading = (pass_reading["student_name"].count()/total_number_of_students)*100
percent_pass_reading
# Calculate the percentage of students with a passing math AND reading score (70 or greater)
pass_math_reading=np.mean([percent_pass_reading, percent_pass_math])
pass_math_reading = round(pass_math_reading)
pass_math_reading
# Create a dataframe to hold the above District results
district_summary = pd.DataFrame({"Total Schools" : [total_number_of_schools],
"Total Students" : [total_number_of_students],
"Total Budget" : [total_budget],
"Average Math Score" : [average_math_score],
"Average Reading Score" : [average_reading_score],
"% Passing Math" : [percent_pass_math],
"% Passing Reading" : [percent_pass_reading],
"% Overall Passing" : [pass_math_reading]
})
#Format the Dataframe
district_summary["Total Students"] = district_summary["Total Students"].map('{:,}'.format)
district_summary["% Overall Passing"] = district_summary["% Overall Passing"].map('{}%'.format)
district_summary["Average Math Score"] = district_summary["Average Math Score"].map('{}%'.format)
district_summary["Average Reading Score"] = district_summary["Average Reading Score"].map('{}%'.format)
district_summary
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * % Overall Passing (The percentage of students that passed math **and** reading.) * Create a dataframe to hold the above results
###Code
#school name
school_name = school_data_complete.set_index('school_name').groupby(['school_name'])
#school types
sch_types = school_data.set_index('school_name')['type']
#Total Students
total_stu = school_name["Student ID"].count()
#Total School Budget
total_budg = school_data.set_index('school_name')['budget']
#Per Student Budget
per_stu_budget = total_budg/total_stu
#Average Math Score
avg_math_score = school_name["math_score"].mean()
#Average Reading Score
avg_reading_score = school_name["reading_score"].mean()
# % Passing Math
percent_pass_math = school_data_complete.loc[school_data_complete["math_score"] >= 70].groupby('school_name')['Student ID'].count()/total_stu
# % Passing reading
percent_pass_reading = school_data_complete.loc[school_data_complete["reading_score"] >= 70].groupby('school_name')['Student ID'].count()/total_stu
# Overall Passing Rate
overall_pass_rate = (percent_pass_math + percent_pass_reading)/2
#create school summary dataframe
school_summary = pd.DataFrame({
"School Type" : sch_types,
"Total Students" : total_stu,
"Total School Budget" : total_budg,
"Per Student Budget" : per_stu_budget,
"Average Math Score" : avg_math_score,
"Average Reading Score" : avg_reading_score,
"% Passing Math": percent_pass_math,
"% Passing Reading" : percent_pass_reading,
"Overall Passing Rate" : overall_pass_rate
})
#rearrange the order
school_summary = school_summary [[ "School Type",
"Total Students",
"Total School Budget",
"Per Student Budget",
"Average Math Score",
"Average Reading Score",
"% Passing Math",
"% Passing Reading",
"Overall Passing Rate"
]]
#format the df
school_summary['Total Students'] = school_summary['Total Students'].map("{:,}".format)
school_summary['Total School Budget'] = school_summary['Total School Budget'].map("{:,}".format)
school_summary['Per Student Budget'] = school_summary['Per Student Budget'].map("{:.2f}".format)
school_summary['Average Math Score'] = school_summary['Average Math Score'].map("{:.1f}".format)
school_summary['Average Reading Score'] = school_summary['Average Reading Score'].map("{:.1f}".format)
school_summary['% Passing Math'] = school_summary['% Passing Math'].map("{:.2%}".format)
school_summary['% Passing Reading'] = school_summary['% Passing Reading'].map("{:.2%}".format)
school_summary['Overall Passing Rate'] = school_summary['Overall Passing Rate'].map("{:.2%}".format)
school_summary
###Output
_____no_output_____
###Markdown
Top Performing Schools (By % Overall Passing) * Sort and display the top five performing schools by % overall passing.
###Code
best_schools = school_summary.sort_values(["Overall Passing Rate"], ascending=False)
best_schools.reset_index(inplace=False)
best_schools.index.name = None
best_schools.head()
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By % Overall Passing) * Sort and display the five worst-performing schools by % overall passing.
###Code
worst_schools = school_summary.sort_values(["Overall Passing Rate"], ascending=True)
worst_schools.reset_index(inplace=False)
worst_schools.index.name = None
worst_schools.head()
###Output
_____no_output_____
###Markdown
Math Scores by Grade * Create a table that lists the average Math Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
freshmen = school_data_complete.loc[school_data_complete["grade"] == "9th"]
frosh_group = freshmen.groupby("school_name")
frosh_avg = frosh_group.mean()
sophomore = school_data_complete.loc[school_data_complete["grade"] == "10th"]
soph_group = sophomore.groupby("school_name")
soph_avg = soph_group.mean()
juniors = school_data_complete.loc[school_data_complete["grade"] == "11th"]
jrs_group = juniors.groupby("school_name")
jrs_avg = jrs_group.mean()
seniors = school_data_complete.loc[school_data_complete["grade"] == "12th"]
srs_group = seniors.groupby("school_name")
srs_avg = srs_group.mean()
grade_summary_under = pd.merge(frosh_avg, soph_avg, on="school_name", suffixes=("_fr", "_soph"))
grade_summary_upper = pd.merge(jrs_avg, srs_avg, on="school_name", suffixes=("_jrs", "_srs"))
grade_summary_total = pd.merge(grade_summary_under, grade_summary_upper, on="school_name")
just_math = grade_summary_total[["math_score_fr", "math_score_soph", "math_score_jrs", "math_score_srs"]]
renamed_math = just_math.rename(columns={"math_score_fr": "9th", "math_score_soph": "10th", "math_score_jrs": "11th", "math_score_srs": "12th"})
renamed_math.index.name = None
renamed_math
###Output
_____no_output_____
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
just_reading = grade_summary_total[["reading_score_fr", "reading_score_soph", "reading_score_jrs", "reading_score_srs"]]
renamed_reading = just_reading.rename(columns={"reading_score_fr": "9th", "reading_score_soph": "10th", "reading_score_jrs": "11th", "reading_score_srs": "12th"})
renamed_reading.index.name = None
renamed_reading
###Output
_____no_output_____
###Markdown
Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
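The binning below relies on `pd.cut`; a tiny illustration with hypothetical per-student budgets (the values are made up, the bin edges and labels match the cell below):
```python
import pandas as pd

budgets = pd.Series([580, 610, 630, 660])
# each value is assigned to the bin whose edges contain it
pd.cut(budgets, [0, 600, 625, 650, 675],
       labels=["<$600", "$600-625", "$626-650", "$651-675"])
```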
###Code
spending_bins = [0, 600, 625, 650, 675]
group_names = ["<$600", "$600-625", "$626-650", "$651-675"]
# these grouped objects are used below but were not defined earlier in this notebook,
# so recreate them from the merged data
group_school = school_data_complete.groupby("school_name")
math_group = school_data_complete.loc[school_data_complete["math_score"] >= 70].groupby("school_name")
reading_group = school_data_complete.loc[school_data_complete["reading_score"] >= 70].groupby("school_name")
summary_table_bins = pd.DataFrame({
"Per Student Budget": (group_school["budget"].mean())/(group_school["student_name"].count()),
"Average Math Score": (group_school["math_score"].mean()),
"Average Reading Score": (group_school["reading_score"].mean()),
"% Passing Math": (math_group["student_name"].count())/(group_school["student_name"].count())*100,
"% Passing Reading": (reading_group["student_name"].count())/(group_school["student_name"].count())*100,
"% Overall Passing Rate": (reading_group["student_name"].count()+math_group["student_name"].count())/(2*group_school["student_name"].count())*100
})
summary_table_bins["Spending Ranges(Per Student)"] = pd.cut(summary_table_bins["Per Student Budget"], spending_bins, labels=group_names)
budget_group = summary_table_bins.groupby("Spending Ranges(Per Student)").mean()
no_psb = budget_group[["Average Math Score", "Average Reading Score", "% Passing Math", "% Passing Reading", "% Overall Passing Rate"]]
no_psb.head()
###Output
_____no_output_____
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
size_summary_table = pd.DataFrame({
"Total Students": group_school["student_name"].count(),
"Average Math Score": (group_school["math_score"].mean()),
"Average Reading Score": (group_school["reading_score"].mean()),
"% Passing Math": (math_group["student_name"].count())/(group_school["student_name"].count())*100,
"% Passing Reading": (reading_group["student_name"].count())/(group_school["student_name"].count())*100,
"% Overall Passing Rate": (reading_group["student_name"].count()+math_group["student_name"].count())/(2*group_school["student_name"].count())*100
})
size_summary_table["School Size"] = pd.cut(size_summary_table["Total Students"], size_bins, labels=group_names)
size_group = size_summary_table.groupby("School Size").mean()
no_ts = size_group[["Average Math Score", "Average Reading Score", "% Passing Math", "% Passing Reading", "% Overall Passing Rate"]]
no_ts.head()
###Output
_____no_output_____
###Markdown
Scores by School Type * Perform the same operations as above, based on school type
###Code
#school_data_complete
type_data = school_data_complete.groupby("type")
type_data.head()
students_type = type_data["student_name"].count()
pass_math_school = school_data_complete.loc[school_data_complete["math_score"] >= 70]
math_type = pass_math_school.groupby("type")
#pass_reading_school = school_data_complete.loc[school_data_complete["reading_score"] >= 70]
reading_type = pass_reading_school.groupby("type")
type_summary_table = pd.DataFrame({
"Average Math Score": (type_data["math_score"].mean()),
"Average Reading Score": (type_data["reading_score"].mean()),
"% Passing Math": (math_type["student_name"].count())/(type_data["student_name"].count())*100,
"% Passing Reading": (reading_type["student_name"].count())/(type_data["student_name"].count())*100,
"% Overall Passing Rate": (reading_type["student_name"].count()+math_type["student_name"].count())/(2*type_data["student_name"].count())*100
})
type_summary_table
###Output
_____no_output_____ |
module_4_decision_trees/4_1_decision_trees.ipynb | ###Markdown
Lab 4.1: Decision TreesDecision trees can be used for either regression or classification tasks. Decision trees are a powerful tool; however, they are very prone to overfitting the training dataset and therefore often fail to generalize well to test data sets. That said, they are the building block for several other powerful machine learning algorithms and are therefore important to learn about.What we'll be doing in this notebook:-----1. Import packages2. Load data3. Build a Decision Tree4. Tune parameters5. Feature importance6. Homework7. Advanced materialOur previous linear regression model assumes linearity, among other assumptions. Decision trees and associated algorithms, in contrast, are not restricted to independent variables that have a linear relationship with the target, and we don't have to ensure several assumptions are true. Therefore we can start to bring in other features that could be useful.After we run our decision trees, we will compare our new output to our output from the linear regressions we ran in the previous notebook. In this notebook, we will be looking at how we can predict the loan amount using decision trees. Here is a visual introduction to [decision trees](https://algobeans.com/2016/07/27/decision-trees-tutorial/) 1. Import packages
###Code
import graphviz
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree
from sklearn.metrics import mean_squared_error, r2_score
###Output
_____no_output_____
###Markdown
If you do not have graphviz installed or are having problems displaying the tree structure later on, try:Mac/Windows:```bash$ brew install graphviz ```Linux:```$ sudo apt-get install graphviz``` 2. Load and format data
###Code
# Load data to pandas DataFrame
data_path = '../data/'
df = pd.read_csv(data_path+'clean_data.csv.zip',
low_memory=False)
df.head()
###Output
_____no_output_____
###Markdown
We are going to build regressors to predict the loan amount and we will build a tree that considers many the features in the dataset - including those we have engineered ourselves.Here we choose a limited subset of data to conduct the analysis for the sake of training time. In practice, we should use more features. This is a mix of numeric and one hot-coded categorical variables.
###Code
cols = df[['loan_amount',
'partner_delinquency_rate',
'posted_year',
'posted_month',
'num_tags',
'#Woman Owned Biz',
'age_int',
'#Repeat Borrower',
'children_int',
'terms.repayment_term',
'pct_female',
'exploratory_partner',
'partner_dollar_amount',
'top_partner_id',
'days_to_fund']]
###Output
_____no_output_____
###Markdown
We are going to build regressors to predict the loan amount and we will build a tree that considers many of the features in the dataset - including those we have engineered ourselves.
###Code
y = cols['loan_amount']
# drop returns a copy of the DataFrame with the specified columns removed.
X = cols.drop('loan_amount', axis=1)
# Split data into training and testing sets;
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
3. Build a Decision TreeWe will use sklearn's implementation of a Decision Tree Regressor and to learn how to use it, here are the [docs](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.htmlsklearn.tree.DecisionTreeRegressor.get_params), or simply put a question mark before a call to the class. Prepending a ? to any method, variable, or class will display that method's defined docstring (way to go IPython!)
###Code
DecisionTreeRegressor?
###Output
_____no_output_____
###Markdown
Many of the sklearn algorithms are implemented using the same standard steps: - **Step 1: Initialize the algorithm** Define the parameters (and hyperparameters) of the algorithm, for example the maximum depth, the minimum samples in a leaf, etc. (check the documentation for more information)- **Step 2: Train the algorithm** Train the algorithm by fitting it to the X_train and y_train datasets.- **Step 3: Evaluate the algorithm** Evaluate the predictive power of the algorithm by comparing the predicted loan amount values to the true values. We can do this for the training and testing datasets. Here is a function which encapsulates the 3 model implementation steps: initialize, train, and evaluate our decision tree regressor.
###Code
def train_score_regressor(sklearn_regressor, X_train, y_train, X_test, y_test, model_parameters):
'''
Purpose:
- train a regressor on training data
- score data on training and test data
- return trained model
'''
# Step 1: Initializing the sklearn regressor
regressor = sklearn_regressor(**model_parameters)
# Step 2: Training the algorithm using the X_train dataset of features and y_train, the associated target features
regressor.fit(X_train, y_train)
# Step 3: Calculating the score of the predictive power on the training and testing dataset.
training_score = regressor.score(X_train, y_train)
testing_score = regressor.score(X_test, y_test)
# Print the results!
print(f"Train score: {training_score:.4}")
print(f"Test score: {testing_score:.4}")
return regressor
###Output
_____no_output_____
###Markdown
With all tree algorithms, the major challenge is using the parameters to balance the bias vs. variance trade-off. To start, check how the model performs when using the default values.
###Code
trained_regressor = train_score_regressor(sklearn_regressor = DecisionTreeRegressor,
X_train = X_train, y_train = y_train,
X_test = X_test, y_test = y_test,
model_parameters = {'random_state':42})
###Output
_____no_output_____
###Markdown
Our model managed to get an almost perfect R^2 score on the training data but performs poorly on the test data. This is a clear indication that the model has **overfit to the training data**. sklearn's default implementation of a DecisionTreeRegressor does not put any restrictions on the depth of the tree, the number of samples per leaf, etc. Consequently, the model finds signal in the noise of the training data set, overfits and performs poorly on the test data. When a model overfits to a training data set, we say it has **high variance**. Since an unconstrained decision tree will almost perfectly model any training data, it will vary tremendously depending on the training data that is provided. 4. Parameter tuningTo reduce the variance, we constrain the model using some of the available parameters, for example:- Criterion (the cost function used to measure the purity of a split)- Maximum depth of the tree- Minimum samples for each node split- Minimum samples for each terminal node- Maximum number of terminal nodes (a sketch combining several of these appears just below)Look back over the [slides](https://docs.google.com/presentation/d/1leWPbwis9GJHJcQehlhPhtKEAErUPvlTpKjnkv1aWWU/edit?usp=sharing) or use this [useful blog](https://www.analyticsvidhya.com/blog/2016/04/complete-tutorial-tree-based-modeling-scratch-in-python/four) for a refresher on decision tree parameters.Initially, we are going to experiment with the max_depth parameter only.
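Before focusing on max_depth alone in the next cell, here is a hedged sketch (not executed in this notebook) of how several of the constraints listed above can be combined in a single estimator; the particular values are illustrative, not tuned, and it reuses the X_train/y_train/X_test/y_test split defined above:

```python
from sklearn.tree import DecisionTreeRegressor

# Illustrative values only. The splitting criterion can also be set via `criterion`
# (its name depends on the sklearn version: "mse" in older releases, "squared_error"
# in newer ones), so it is omitted here.
constrained_tree = DecisionTreeRegressor(
    max_depth=6,            # maximum depth of the tree
    min_samples_split=50,   # minimum samples required to split an internal node
    min_samples_leaf=20,    # minimum samples required in each terminal node (leaf)
    max_leaf_nodes=40,      # maximum number of terminal nodes
    random_state=42,
)
constrained_tree.fit(X_train, y_train)
print(constrained_tree.score(X_test, y_test))
```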
###Code
# Define the model parameters
# We are fixing the random state so that the results are reproducible and consistent.
parameters = {"max_depth":6,'random_state':42}
# Train and evaluate the model
trained_regressor = train_score_regressor(sklearn_regressor=DecisionTreeRegressor,
X_train=X_train, y_train=y_train,
X_test=X_test, y_test=y_test,
model_parameters = parameters)
###Output
_____no_output_____
###Markdown
Although the training R^2 score has dropped significantly, the test R^2 score increased. Since the goal is to develop a model that accurately predicts data we have never seen, that is the metric we care about! Now that we have increased performance, let's take a look at what the Decision Tree looks like.
###Code
# from the sklearn tree library, create image of trained decision tree
dot_data = tree.export_graphviz(trained_regressor, out_file=None,
feature_names=X_train.columns,
class_names=y_train.values,
filled=True, rounded=True,
special_characters=True)
# use graphviz to render the image
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
###Markdown
**IMPORTANT** A DecisionTreeRegressor with a max depth of only 6 is still rather complicated. To develop your intuition for the various input parameters, manually adjust them up and down to see the impacts. Overall we're aiming for the highest predictive power on the test set. However, if we were to tune the parameters manually towards a higher score on the test data set, we would overfit to this specific test data set and the model would not generalize well to a secondary test data set. To avoid this, we will use k-fold validation. In addition to k-fold validation, we will use sklearn's GridSearchCV, which allows us to use k-fold validation to assess every permutation of possible values for the parameters that we provide. See the [Advanced Material](AdvancedCV) at the bottom of this notebook for a quick overview of these two methods.**Note** since we are training one regressor for each possible permutation of specified parameter values, this next cell will take some time to run. That is why you need to gain an intuition for which values to test!
###Code
# Set parameters to search through - known as parameter grid
parameters = {'max_depth':[8,10,14],
'min_impurity_decrease': [.1,.01, 0.0],
'min_samples_split': [10, 50, 2]}
# Initialize model
decision_regressor= DecisionTreeRegressor(random_state=42)
# Initialize GridSearch and then fit
regressor = GridSearchCV(decision_regressor, parameters)
regressor.fit(X_train, y_train)
# print out what GridSearchCV found to be the best parameters
regressor.best_estimator_.get_params()
# evaluate the tuned model
trained_regressor = train_score_regressor(sklearn_regressor = DecisionTreeRegressor,
X_train = X_train, y_train = y_train,
X_test = X_test, y_test = y_test,
model_parameters = regressor.best_estimator_.get_params())
###Output
_____no_output_____
###Markdown
Performance on the test data has increased again - not bad! The R^2 number above is pretty telling, but it is always good to visualise how the predictions look in a scatter plot.
###Code
# plotting a graph of the true values vs the predicted values for the training and test datasets
def plot_y_yhat_scatter(y_actual,y_predicted,train_test):
ax = sns.regplot(x=y_actual, y=y_predicted, fit_reg=False)
ax.set_xlabel('true values')
ax.set_ylabel('predicted values')
ax.set_title('Relationship between true and predicted loan amounts: '+train_test+' results')
pass
plot_y_yhat_scatter(y_train, trained_regressor.predict(X_train),train_test = "training")
plot_y_yhat_scatter(y_test, trained_regressor.predict(X_test),train_test = "test")
###Output
_____no_output_____
###Markdown
5. Feature ImportanceWe can look at which features are driving our model's predictions by examining the feature importance. Remember that the magnitude of the 'importance' is not indicative of how important the feature is, only the order matters!For example,- feature A has an importance of 0.5 - feature B has an importance of 0.25. All we can take away is that feature A explains more variance than feature B, **not** that feature A explains twice as much as feature B.
###Code
# Get the feature importances from our final trained model...
importances = trained_regressor.feature_importances_
# Find the indices of the feature importances in descending order
indices = np.argsort(importances)[::-1]
# Plotting a bar chart of feature importances in descending order
plt.figure(figsize=(12,7))
sns.barplot(y=X_train.columns[indices],x=importances[indices])
###Output
_____no_output_____
###Markdown
There is not a clear relationship between any single feature and the loan_amount. The most important feature is the borrower count for One Acre Fund during their high-loan period - this is very specific to just a small subset of the data. However, aggregating these features in the decision tree leads to effective predictions (R^2 ~ 0.66). This is a testament to the predictive power of decision trees! Remember that Decision Trees can also be used to classify data. For example, some interesting classification questions we could investigate are:- Can we classify which loans expired and which ones got funded? (a hedged sketch of this idea follows below)- Is a loan posted by a male or female? 6. Advanced Material: Optimising the algorithm K-folds example for finding optimal parameters K-folds is a method of evaluating and tuning a model on the given dataset without overfitting to either the training dataset or the testing dataset. It finds the optimal balance between bias and variance in the model. Below we show how the model performs on the training and test datasets while varying the max tree depth.
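Picking up the classification suggestion above before moving on to the depth sweep below: a hedged sketch of how the funded-vs-expired question could be framed with a DecisionTreeClassifier. It assumes a labelled column (called `status` here, with values `'funded'`/`'expired'`) that is not part of the numeric feature subset used in this notebook, so treat it as an outline rather than something runnable against `cols` as-is:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# `df_with_status` and its `status` column are hypothetical names for illustration
y_cls = (df_with_status["status"] == "funded").astype(int)
X_cls = df_with_status.drop(columns=["status", "loan_amount"])

Xc_train, Xc_test, yc_train, yc_test = train_test_split(
    X_cls, y_cls, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(max_depth=6, random_state=42)
clf.fit(Xc_train, yc_train)
print(clf.score(Xc_test, yc_test))  # mean accuracy on the held-out set
```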
###Code
# define max depth range
depth_range = np.asarray(range(2,22,2))
# initialize empty arrays to store the results
scores_train = np.zeros(len(depth_range))
scores_test = np.zeros(len(depth_range))
for i in range(len(depth_range)):
# train DTR with given max depth
dt_regressor = DecisionTreeRegressor(max_depth=depth_range[i], random_state=42)
model = dt_regressor.fit(X_train, y_train)
# evaluate on both training and test datasets
scores_train[i] = model.score(X_train, y_train)
scores_test[i] = model.score(X_test, y_test)
# plot the results on the same graph
ax = sns.regplot(x=depth_range, y=scores_train, order=3, ci=None,label='train')
sns.regplot(x=depth_range, y=scores_test,order=3, ci=None, label='test', ax=ax)
ax.legend(loc='best')
ax.set_ylabel('R2 from regression between true and predicted values')
ax.set_xlabel('Max depth of the tree')
###Output
_____no_output_____
###Markdown
As the depth increases:- The training score increases- But the testing score decreases. Once the test score starts decreasing, this indicates that the model is overfitting. We could be tempted to say that the optimal depth is 8 as this corresponds to the maximum score for the testing data. **This is not always the case**. The test set is just a random, fixed subset of the data, so choosing the optimal parameter here would be overfitting to the test set. This is where K-Folds cross validation comes in! This method does the following:- Splits the dataset into K equal random subsets- Trains the model on K-1 subsets- Evaluates performance on the Kth, left-out subset- Stores the evaluation metric- Repeats this K times, once for each subset. If K = 5, the algorithm trains 5 times. Each time it holds out a 5th of the data, trains on the other 4/5ths and then evaluates the performance on the held-out 5th. Here is an example of how the cross validation score changes with maximum tree depth
###Code
# initialize empty array to store results
scores_cv = np.empty(len(depth_range))
for i in range(len(depth_range)):
# initialize model
dt_regressor = DecisionTreeRegressor(max_depth=depth_range[i], random_state=42)
# calculate the cross val scores. This returns an array where each element corresponds to the performance on each k-fold.
cv_scores = cross_val_score(dt_regressor, X_train, y_train,cv=5, n_jobs=-1)
# calculate mean cross validation score and save
scores_cv[i] = np.mean(cv_scores)
# plot results
ax = sns.regplot(x=depth_range, y=scores_cv, ci=None, order=3)
ax.set_xlabel('Max depth of the tree')
ax.set_ylabel('Average cross validated R2')
###Output
_____no_output_____
###Markdown
Again we see the same general trend of the score increasing initially and then dropping off. From this curve, the optimal max_depth would be between 8 and 10. GridSearchCV (CV = cross validation)Above we were looking at a single parameter. However, to increase performance we should adjust several parameters. Sklearn's GridSearchCV uses the cross-validation above to assess the performance of **each possible permutation** of the hyper-parameters that you specify. For this reason, care should be taken to choose the correct range of parameters to search through, as adding an additional parameter can increase the search time exponentially. For example, the grid below already contains 3 × 4 = 12 parameter settings, each of which is cross-validated; adding one more parameter with 5 candidate values would multiply that to 60 settings. It then returns a model initialised with the optimal parameters.
###Code
GridSearchCV?
parameters = {'min_impurity_decrease': [.1, 0.01, 0.],
'max_depth': [None, 5, 8, 10]}
# initialize model
gridrf = DecisionTreeRegressor()
# set up and fit gridsearchCV
grid_rf = GridSearchCV(gridrf, parameters)
grid_rf.fit(X_train, y_train)
# evaluate the tuned model
trained_regressor = train_score_regressor(sklearn_regressor = DecisionTreeRegressor,
X_train = X_train, y_train = y_train,
X_test = X_test, y_test = y_test,
model_parameters = grid_rf.best_estimator_.get_params())
###Output
_____no_output_____
###Markdown
We can check the variation in the mean cross validation score for the different parameter permutations in the grid search and see which parameters have the biggest impact on performance. In this particular case, it shows that max_depth has the biggest impact.
###Code
# get the cross validation mean score and associated std across the K folds
means = grid_rf.cv_results_['mean_test_score']
stds = grid_rf.cv_results_['std_test_score']
# print the mean, std and parameters for each permutation
for mean, std, params in zip(means, stds, grid_rf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
###Output
_____no_output_____
###Markdown
Lab 4.1: Decision Trees [Page 14](https://drive.google.com/file/d/1Sd_LN-WE_W3Zo-YZrMBe90H2i4_ieFRs/view)Decision trees can be used for either regression or classification tasks. Decision trees are a powerful tool; however, they are very prone to overfitting the training dataset and therefore often fail to generalize well to test data sets. That said, they are the building block for several other powerful machine learning algorithms and are therefore important to learn about.What we'll be doing in this notebook:-----1. Import packages2. Load data3. Build a Decision Tree4. Tune parameters5. Feature importance6. Homework7. Advanced materialOur previous linear regression model assumes linearity, among other assumptions. Decision trees and associated algorithms, in contrast, are not restricted to independent variables that have a linear relationship with the target, and we don't have to ensure several assumptions are true. Therefore we can start to bring in other features that could be useful.After we run our decision trees, we will compare our new output to our output from the linear regressions we ran in the previous notebook. In this notebook, we will be looking at how we can predict the loan amount using decision trees. Here is a visual introduction to [decision trees](https://algobeans.com/2016/07/27/decision-trees-tutorial/) ----Install additional programs-----You need to have [graphviz](https://www.graphviz.org/) installed to display the tree structure later on.Mac/Windows:```bash$ brew install graphviz ```Linux:```bash$ sudo apt-get install graphviz``` 1. Import packages
###Code
import graphviz
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree
from sklearn.metrics import mean_squared_error, r2_score
###Output
_____no_output_____
###Markdown
2. Load and format data
###Code
# Load data saved locally
path = '../data/'
filename = 'loans.csv'
df = pd.read_csv(path+filename)
# Load data from Github if using colab
!git clone https://github.com/DeltaAnalytics/machine_learning_for_good_data
df = pd.read_csv("machine_learning_for_good_data/loans.csv")
###Output
_____no_output_____
###Markdown
We are going to build regressors to predict the loan amount and we will build a tree that considers many of the features in the dataset - including those we have engineered ourselves. Here we choose a limited subset of data to conduct the analysis for the sake of training time. In practice, we should use more features. This is a mix of numeric and one-hot-coded categorical variables.
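The cell below keeps only numeric columns for simplicity. If you wanted to retain categorical features instead, one-hot encoding with `pd.get_dummies` is a common approach; a hedged sketch (the column names are hypothetical examples and may not exist in `loans.csv`):

```python
import pandas as pd

# 'sector' and 'country' stand in for whatever categorical columns the file actually has
categorical_cols = ["sector", "country"]
encoded = pd.get_dummies(df, columns=categorical_cols, drop_first=True)
```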
###Code
# Drop everything that is not numeric
df = df.select_dtypes(exclude=['object'])
y_column = 'loan_amount'
y = df[y_column]
# Drop returns a copy of the DataFrame with the specified columns removed.
X = df.drop([y_column, "id_number"], axis=1) # id_number will not be helpful
# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
3. Build a Decision TreeWe will use sklearn's implementation of a Decision Tree Regressor and to learn how to use it, here are the [docs](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.htmlsklearn.tree.DecisionTreeRegressor.get_params), or simply put a question mark before a call to the class. Prepending a ? to any method, variable, or class will display that method's defined docstring (way to go IPython!)
###Code
DecisionTreeRegressor?
###Output
_____no_output_____
###Markdown
Many of the sklearn algorithms are implemented using the same standard steps: - **Step 1: Initialize the algorithm** Define the parameters (and hyperparameters) of the algorithm, for example the maximum depth, the minimum samples in a leaf, etc. (check the documentation for more information)- **Step 2: Train the algorithm** Train the algorithm by fitting it to the X_train and y_train datasets.- **Step 3: Evaluate the algorithm** Evaluate the predictive power of the algorithm by comparing the predicted loan amount values to the true values. We can do this for the training and testing datasets. Here is a function which encapsulates the 3 model implementation steps: initialize, train, and evaluate our decision tree regressor.
###Code
def train_score_regressor(sklearn_regressor, X_train, y_train, X_test, y_test, model_parameters, print_oob_score=False):
"""A helper function that:
- Trains a regressor on training data
- Scores data on training and test data
- Returns a trained model
"""
# Step 1: Initializing the sklearn regressor
regressor = sklearn_regressor(**model_parameters)
# Step 2: Training the algorithm using the X_train dataset of features and y_train, the associated target features
regressor.fit(X_train, y_train)
# Step 3: Calculating the score of the predictive power on the training and testing dataset.
training_score = regressor.score(X_train, y_train)
testing_score = regressor.score(X_test, y_test)
# Print the results!
print(f"Train score: {training_score:>5.4f}")
print(f"Test score: {testing_score:>7.4f}")
if print_oob_score:
print(f"OOB score: {regressor.oob_score_:>8.4f}")
return regressor
###Output
_____no_output_____
###Markdown
With all tree algorithms, the major challenge is using the parameters to balance the bias vs. variance trade-off. To start, check how the model performs when using the default values.
###Code
trained_regressor = train_score_regressor(sklearn_regressor=DecisionTreeRegressor,
X_train=X_train,
y_train=y_train,
X_test=X_test,
y_test=y_test,
model_parameters={'random_state':42})
###Output
Train score: 0.9830
Test score: 0.8886
###Markdown
Our model scores far higher on the training data (R^2 ≈ 0.98 above) than on the test data (R^2 ≈ 0.89). This is a clear indication that the model has **overfit to the training data**. sklearn's default implementation of a DecisionTreeRegressor does not put any restrictions on the depth of the tree, the number of samples per leaf, etc. Consequently, the model finds signal in the noise of the training data set, overfits and performs worse on the test data. When a model overfits to a training data set, we say it has **high variance**. Since an unconstrained decision tree will almost perfectly model any training data, it will vary tremendously depending on the training data that is provided. 4. Parameter tuningTo reduce the variance, we constrain the model using some of the available parameters, for example:- Criterion (the cost function used to measure the purity of a split)- Maximum depth of the tree- Minimum samples for each node split- Minimum samples for each terminal node- Maximum number of terminal nodesLook back over the [slides](https://docs.google.com/presentation/d/1leWPbwis9GJHJcQehlhPhtKEAErUPvlTpKjnkv1aWWU/edit?usp=sharing) or use this [useful blog](https://www.analyticsvidhya.com/blog/2016/04/complete-tutorial-tree-based-modeling-scratch-in-python/four) for a refresher on decision tree parameters.Initially, we are going to experiment with the max_depth parameter only.
###Code
# Define the model parameters
# We are fixing the random state so that the results are reproducible and consistent.
parameters = {"max_depth":6,
'random_state':42}
# Train and evaluate the model
trained_regressor = train_score_regressor(sklearn_regressor=DecisionTreeRegressor,
X_train=X_train,
y_train=y_train,
X_test=X_test,
y_test=y_test,
model_parameters=parameters)
###Output
Train score: 0.9319
Test score: 0.9390
###Markdown
Although the training R^2 score has dropped significantly, the test R^2 score increased. Since the goal is to develop a model that accurately predicts data we have never seen, that is the metric we care about! Now that we have increased performance, let's take a look at what the Decision Tree looks like.
###Code
# from the sklearn tree library, create image of trained decision tree
dot_data = tree.export_graphviz(trained_regressor, out_file=None,
feature_names=X_train.columns,
class_names=y_train.values,
filled=True, rounded=True,
special_characters=True)
# use graphviz to render the image
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
###Markdown
**IMPORTANT** A DecisionTreeRegressor with a max depth of only 6 is still rather complicated. To develop your intuition for the various input parameters, manually adjust them up and down to see the impacts. Overall we're aiming for the highest predictive power on the test set. However, if we were to tune the parameters manually towards a higher score on the test data set, we would overfit to this specific test data set and the model would not generalize well to a secondary test data set. To avoid this, we will use k-fold validation. In addition to k-fold validation, we will use sklearn's GridSearchCV, which allows us to use k-fold validation to assess every permutation of possible values for the parameters that we provide. See the [Advanced Material](AdvancedCV) at the bottom of this notebook for a quick overview of these two methods.**Note** since we are training one regressor for each possible permutation of specified parameter values, this next cell will take some time to run. That is why you need to gain an intuition for which values to test!
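Before handing the whole grid to GridSearchCV in the next cell, it can help to see what a single k-fold evaluation looks like for one candidate setting. A minimal sketch, reusing the training split from above (`cross_val_score` is already imported at the top of the notebook):

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

candidate = DecisionTreeRegressor(max_depth=8, random_state=42)
fold_scores = cross_val_score(candidate, X_train, y_train, cv=5)  # one R^2 score per fold
print(fold_scores, fold_scores.mean())
```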
###Code
# Set parameters to search through - known as parameter grid
parameters = {'max_depth':[8,10,14],
'min_impurity_decrease': [.1,.01, 0.0],
'min_samples_split': [10, 50, 2]}
# Initialize model
decision_regressor= DecisionTreeRegressor(random_state=42)
# Initialize GridSearch and then fit
regressor = GridSearchCV(decision_regressor, parameters)
regressor.fit(X_train, y_train)
# print out what GridSearchCV found to be the best parameters
regressor.best_estimator_.get_params()
# evaluate the tuned model
trained_regressor = train_score_regressor(sklearn_regressor=DecisionTreeRegressor,
X_train=X_train,
y_train=y_train,
X_test=X_test,
y_test=y_test,
model_parameters=regressor.best_estimator_.get_params())
###Output
Train score: 0.9462
Test score: 0.9280
###Markdown
Performance on the test data is roughly on par with the simple max_depth=6 tree in this run (in fact slightly lower), so the grid search has not bought us much here. The R^2 number above is pretty telling, but it is always good to visualise how the predictions look in a scatter plot.
###Code
# plotting a graph of the true values vs the predicted values for the training and test datasets
def plot_y_yhat_scatter(y_actual,y_predicted,train_test):
ax = sns.regplot(x=y_actual, y=y_predicted, fit_reg=False)
ax.set_xlabel('true values')
ax.set_ylabel('predicted values')
ax.set_title('Relationship between true and predicted loan amounts: '+train_test+' results')
pass
plot_y_yhat_scatter(y_train, trained_regressor.predict(X_train),train_test = "training")
plot_y_yhat_scatter(y_test, trained_regressor.predict(X_test),train_test = "test")
###Output
_____no_output_____
###Markdown
5. Feature ImportanceWe can look at which features are driving our model's predictions by examining the feature importance. Remember that the magnitude of the 'importance' is not indicative of how important the feature is, only the order matters!For example,- feature A has an importance of 0.5 - feature B has an importance of 0.25. All we can take away is that feature A explains more variance than feature B, **not** that feature A explains twice as much as feature B.
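As a complement to the bar chart built in the next cell, the same information can be read off as a sorted table by wrapping the importances in a pandas Series; a minimal sketch using the model trained above:

```python
import pandas as pd

feat_imp = pd.Series(trained_regressor.feature_importances_, index=X_train.columns)
print(feat_imp.sort_values(ascending=False).head(10))
```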
###Code
# Get the feature importances from our final trained model...
importances = trained_regressor.feature_importances_
# Find the indices of the feature importances in descending order
indices = np.argsort(importances)[::-1]
# Plotting a bar chart of feature importances in descending order
plt.figure(figsize=(12,7))
sns.barplot(y=X_train.columns[indices],x=importances[indices]);
###Output
_____no_output_____
###Markdown
There is not a clear relationship between any single feature and the loan_amount. The most important feature is the borrower count for One Acre Fund during their high-loan period - this is very specific to just a small subset of the data. However, aggregating these features in the decision tree leads to effective predictions (test R^2 ≈ 0.93 in this run). This is a testament to the predictive power of decision trees! Remember that Decision Trees can also be used to classify data. For example, some interesting classification questions we could investigate are:- Can we classify which loans expired and which ones got funded?- Is a loan posted by a male or female? 6. Advanced Material: Optimising the algorithm K-folds example for finding optimal parameters K-folds is a method of evaluating and tuning a model on the given dataset without overfitting to either the training dataset or the testing dataset. It finds the optimal balance between bias and variance in the model. Below we show how the model performs on the training and test datasets while varying the max tree depth.
###Code
# define max depth range
depth_range = np.asarray(range(2,22,2))
# initialize empty arrays to store the results
scores_train = np.zeros(len(depth_range))
scores_test = np.zeros(len(depth_range))
for i in range(len(depth_range)):
# train DTR with given max depth
dt_regressor = DecisionTreeRegressor(max_depth=depth_range[i], random_state=42)
model = dt_regressor.fit(X_train, y_train)
# evaluate on both training and test datasets
scores_train[i] = model.score(X_train, y_train)
scores_test[i] = model.score(X_test, y_test)
# plot the results on the same graph
ax = sns.regplot(x=depth_range, y=scores_train, order=3, ci=None,label='train')
sns.regplot(x=depth_range, y=scores_test,order=3, ci=None, label='test', ax=ax)
ax.legend(loc='best')
ax.set_ylabel('R2 from regression between true and predicted values')
ax.set_xlabel('Max depth of the tree')
###Output
_____no_output_____
###Markdown
As the depth increases:- The training score increases- But the testing score decreases. Once the test score starts decreasing, this indicates that the model is overfitting. We could be tempted to say that the optimal depth is 8 as this corresponds to the maximum score for the testing data. **This is not always the case**. The test set is just a random, fixed subset of the data, so choosing the optimal parameter here would be overfitting to the test set. This is where K-Folds cross validation comes in! This method does the following:- Splits the dataset into K equal random subsets- Trains the model on K-1 subsets- Evaluates performance on the Kth, left-out subset- Stores the evaluation metric- Repeats this K times, once for each subset. If K = 5, the algorithm trains 5 times. Each time it holds out a 5th of the data, trains on the other 4/5ths and then evaluates the performance on the held-out 5th. Here is an example of how the cross validation score changes with maximum tree depth
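To make the bulleted procedure above concrete, here is a minimal sketch of the same loop written out with sklearn's `KFold`; the `cross_val_score` call in the next cell does essentially this internally:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

kf = KFold(n_splits=5)
fold_scores = []
for train_idx, val_idx in kf.split(X_train):
    fold_model = DecisionTreeRegressor(max_depth=8, random_state=42)
    fold_model.fit(X_train.iloc[train_idx], y_train.iloc[train_idx])
    fold_scores.append(fold_model.score(X_train.iloc[val_idx], y_train.iloc[val_idx]))

print(np.mean(fold_scores))  # average held-out R^2 across the 5 folds
```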
###Code
# initialize empty array to store results
scores_cv = np.empty(len(depth_range))
for i in range(len(depth_range)):
# initialize model
dt_regressor = DecisionTreeRegressor(max_depth=depth_range[i], random_state=42)
# calculate the cross val scores. This returns an array where each element corresponds to the performance on each k-fold.
cv_scores = cross_val_score(dt_regressor, X_train, y_train,cv=5, n_jobs=-1)
# calculate mean cross validation score and save
scores_cv[i] = np.mean(cv_scores)
# plot results
ax = sns.regplot(x=depth_range, y=scores_cv, ci=None, order=3);
ax.set_xlabel('Max depth of the tree');
ax.set_ylabel('Average cross validated R2');
###Output
_____no_output_____
###Markdown
Again we see the same general trend of the score increasing initially and then dropping off. From this curve, the optimal max_depth would be between 8 and 10. GridSearchCV (CV = cross validation)Above we were looking at a single parameter. However, to increase performance we should adjust several parameters. Sklearn's GridSearchCV uses the cross-validation above to assess the performance of **each possible permutation** of the hyper-parameters that you specify. For this reason, care should be taken to choose the correct range of parameters to search through, as adding an additional parameter can increase the search time exponentially. It then returns a model initialised with the optimal parameters.
###Code
GridSearchCV?
parameters = {'min_impurity_decrease': [.1, 0.01, 0.],
'max_depth': [None, 5, 8, 10]}
# initialize model
gridrf = DecisionTreeRegressor()
# set up and fit gridsearchCV
grid_rf = GridSearchCV(gridrf, parameters)
grid_rf.fit(X_train, y_train)
# evaluate the tuned model
trained_regressor = train_score_regressor(sklearn_regressor=DecisionTreeRegressor,
X_train=X_train,
y_train=y_train,
X_test=X_test,
y_test=y_test,
model_parameters=grid_rf.best_estimator_.get_params())
###Output
Train score: 0.9230
Test score: 0.7213
###Markdown
We can check the variation in the mean cross validation score for the different parameter permutations in the grid search and see which parameters have the biggest impact on performance. In this particular case, it shows that max_depth has the biggest impact.
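Besides the loop in the next cell, the same `cv_results_` dictionary can be viewed as a sorted table; a minimal sketch:

```python
import pandas as pd

cv_df = pd.DataFrame(grid_rf.cv_results_)
print(cv_df[["params", "mean_test_score", "std_test_score", "rank_test_score"]]
      .sort_values("rank_test_score")
      .head())
```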
###Code
# get the cross validation mean score and associated std across the K folds
means = grid_rf.cv_results_['mean_test_score']
stds = grid_rf.cv_results_['std_test_score']
# print the mean, std and parameters for each permutation
for mean, std, params in zip(means, stds, grid_rf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
###Output
0.837 (+/-0.091) for {'max_depth': None, 'min_impurity_decrease': 0.1}
0.843 (+/-0.085) for {'max_depth': None, 'min_impurity_decrease': 0.01}
0.844 (+/-0.074) for {'max_depth': None, 'min_impurity_decrease': 0.0}
0.825 (+/-0.119) for {'max_depth': 5, 'min_impurity_decrease': 0.1}
0.886 (+/-0.064) for {'max_depth': 5, 'min_impurity_decrease': 0.01}
0.825 (+/-0.119) for {'max_depth': 5, 'min_impurity_decrease': 0.0}
0.883 (+/-0.064) for {'max_depth': 8, 'min_impurity_decrease': 0.1}
0.822 (+/-0.110) for {'max_depth': 8, 'min_impurity_decrease': 0.01}
0.882 (+/-0.065) for {'max_depth': 8, 'min_impurity_decrease': 0.0}
0.867 (+/-0.063) for {'max_depth': 10, 'min_impurity_decrease': 0.1}
0.867 (+/-0.064) for {'max_depth': 10, 'min_impurity_decrease': 0.01}
0.867 (+/-0.066) for {'max_depth': 10, 'min_impurity_decrease': 0.0}
|
src/bayesian_optimization/reward_func_experimentation.ipynb | ###Markdown
miniSCOT Functions
###Code
def invoke_miniscot(x):
"""
Handling single API call to miniSCOT simulation given some inputs
x contains parameter configs x = [x0 x1 ...]
- The order of parameters in x should follow the order specified in the parameter_space declaration
- E.g. here we specify num_batteries = x[0]
"""
kwargs = {
'time_horizon': 336,
'num_batteries': int(x[0])
}
if len(x) == 2:
kwargs.update({
'max_battery_capacity': int(x[1])
})
cum_reward = run_simulation(**kwargs)
return cum_reward[-1]
def f(X):
"""
Handling multiple API calls to miniSCOT simulation given some inputs
X is a matrix of parameters
- Each row is a set of parameters
- The order of parameters in the row should follow the order specified in the parameter_space declaration
"""
Y = []
for x in X:
        final_reward = invoke_miniscot(x)  # invoke_miniscot already returns the final cumulative reward
        # Note that we negate the reward; we want to find the minimum
        Y.append(-final_reward)
Y = np.reshape(np.array(Y), (-1, 1))
return Y
def f_multiprocess(X):
"""
Handling multiple API calls to miniSCOT simulation given some inputs using multiprocessing.
X is a matrix of parameters
- Each row is a set of parameters
- The order of parameters in the row should follow the order specified in the parameter_space declaration
"""
# Set to None to use all available CPU
max_pool = None
with Pool(max_pool) as p:
Y = list(
tqdm(
p.imap(invoke_miniscot, X),
total=X.shape[0]
)
)
# Note that we negate the reward; want to find min
Y = -np.reshape(np.array(Y), (-1, 1))
return Y
###Output
_____no_output_____
###Markdown
Plotting Functions
###Code
def plot_reward(X, Y, labels):
"""
Plots reward against a maximum of two dimensions.
"""
plt.style.use('seaborn')
fig = plt.figure(figsize=(10,10))
order = np.argsort(X[:,0])
if X.shape[1] == 1:
ax = plt.axes()
ax.plot(X[order,0], Y[order])
ax.set_xlabel(labels[0])
ax.set_ylabel("Cumulative reward")
elif X.shape[1] == 2:
ax = plt.axes(projection='3d')
im = ax.plot_trisurf(X[order,0].flatten(), X[order,1].flatten(), Y[order].flatten(), cmap=cm.get_cmap('viridis'))
fig.colorbar(im)
ax.set_xlabel(labels[0])
ax.set_ylabel(labels[1])
ax.set_zlabel("Cumulative reward")
else:
raise ValueError('X has too many dimensions to plot - max 2 allowed')
return fig, ax
###Output
_____no_output_____
###Markdown
Investigation Parameter Space
###Code
max_num_batteries = 500
min_battery_capacity = 140
max_battery_capacity = 160
num_data_points = 10
timsteps_per_week = 336
num_weeks = 52
num_batteries = DiscreteParameter('num_batteries', range(0, max_num_batteries+1))
max_battery_capacities = DiscreteParameter('max_battery_capacity', range(min_battery_capacity, max_battery_capacity+1))
time_horizon = DiscreteParameter('time_horizon', [i for i in range(0, num_weeks*timsteps_per_week, timsteps_per_week)])
# parameter_space = ParameterSpace([num_batteries])
parameter_space = ParameterSpace([num_batteries, max_battery_capacities])
design = RandomDesign(parameter_space)
X = design.get_samples(num_data_points)
X
###Output
_____no_output_____
###Markdown
Example Run The same code appears at the top of the Emukit cell below. Optionally run this to check whether we get a convex function.
###Code
Y = f_multiprocess(X)
plot_reward(X, Y, parameter_space.parameter_names)
###Output
_____no_output_____
###Markdown
Emukit Bayesian Optimisation
###Code
successful_sample = False
num_tries = 0
max_num_tries = 3
use_default = False
use_ard = False
while not successful_sample and num_tries < max_num_tries:
print(f"CURRENT ATTEMPT #{num_tries}")
X = design.get_samples(num_data_points)
Y = f_multiprocess(X)
# plot init values
plot_reward(X, Y, parameter_space.parameter_names)
# emulator model
if use_default:
gpy_model = GPRegression(X, Y)
else:
kernel = GPy.kern.RBF(1, lengthscale=1e1, variance=1e4, ARD=use_ard)
gpy_model = GPy.models.GPRegression(X, Y, kernel, noise_var=1e-10)
try:
gpy_model.optimize()
print("okay to optimize")
model_emukit = GPyModelWrapper(gpy_model)
# Load core elements for Bayesian optimization
expected_improvement = ExpectedImprovement(model=model_emukit)
optimizer = GradientAcquisitionOptimizer(space=parameter_space)
# Create the Bayesian optimization object
batch_size = 3
bayesopt_loop = BayesianOptimizationLoop(model=model_emukit,
space=parameter_space,
acquisition=expected_improvement,
batch_size=batch_size)
# Run the loop and extract the optimum; we either complete 10 steps or converge
max_iters = 10
stopping_condition = (
FixedIterationsStoppingCondition(i_max=max_iters) | ConvergenceStoppingCondition(eps=0.01)
)
bayesopt_loop.run_loop(f_multiprocess, stopping_condition)
print("successfully ran loop")
successful_sample = True
    except Exception:
        # the GP fit or the BO loop failed for this sample; resample and retry
        num_tries += 1
num_tries
# X, Y
new_X, new_Y = bayesopt_loop.loop_state.X, bayesopt_loop.loop_state.Y
new_order = np.argsort(new_X[:,0])
new_X = new_X[new_order,:]
new_Y = new_Y[new_order]
# new_X, new_Y
###Output
_____no_output_____
###Markdown
Visualize and Get Extrema Simple Plot
###Code
plot_reward(new_X, new_Y, parameter_space.parameter_names)
###Output
_____no_output_____
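Because `f_multiprocess` negates the cumulative reward, the best configuration found so far is simply the row of the loop state with the smallest objective value. A minimal sketch, using only the `new_X`/`new_Y` arrays collected above:
```
best_idx = int(np.argmin(new_Y))            # smallest negated reward = largest reward
best_params = new_X[best_idx]
best_reward = -float(new_Y[best_idx])       # undo the negation applied in f_multiprocess
print(f"best parameters: {best_params}, cumulative reward: {best_reward}")
```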
###Markdown
2D Plot
###Code
x_plot = np.reshape(np.array([i for i in range(0, max_num_batteries)]), (-1,1))
mu_plot, var_plot = model_emukit.predict(x_plot)
# plt.figure(figsize=(12, 8))
plt.figure(figsize=(7, 5))
LEGEND_SIZE = 15
plt.plot(new_X, new_Y, "ro", markersize=10, label="All observations")
plt.plot(X, Y, "bo", markersize=10, label="Initial observations")
# plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
# plt.xlim(0, 25)
plt.show()
###Output
_____no_output_____
###Markdown
3D Plots Inferred Surface
###Code
plt.style.use('seaborn')
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection='3d')
im = ax.plot_trisurf(new_X[:,0].flatten(), new_X[:,1].flatten(), new_Y.flatten(), cmap=cm.get_cmap('viridis'), alpha=0.75)
fig.colorbar(im)
ax.scatter(X[:,0].flatten(), X[:,1].flatten(), Y.flatten(), s=100, marker="o", color="b", label="Initial observations")
ax.scatter(new_X[:,0].flatten(), new_X[:,1].flatten(), new_Y.flatten(), marker="x", color="r", label="All observations")
ax.legend(loc=2, prop={'size': LEGEND_SIZE})
ax.set_xlabel(r"$x_1$")
ax.set_ylabel(r"$x_2$")
ax.set_ylabel(r"$f(x)$")
ax.grid(True)
###Output
_____no_output_____
###Markdown
Prediction Surface
###Code
mesh_X, mesh_Y = np.mgrid[1:max_num_batteries+1:1, min_battery_capacity:max_battery_capacity+1:1]
positions = np.vstack([mesh_X.ravel(), mesh_Y.ravel()]).T
mu_plot, var_plot = model_emukit.predict(positions)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(mesh_X, mesh_Y, mu_plot.reshape((500,21)), cmap=cm.coolwarm,linewidth=0, antialiased=False)
###Output
_____no_output_____
###Markdown
Old Code
###Code
# X = design.get_samples(num_data_points)
# Y = f(X)
# # emulator model
# use_default= False
# use_ard=True
# if use_default:
# gpy_model = GPRegression(X, Y)
# else:
# kernel = GPy.kern.RBF(1, lengthscale=1e1, variance=1e4, ARD=use_ard)
# gpy_model = GPy.models.GPRegression(X, Y, kernel, noise_var=1e-10)
# gpy_model.optimize()
# model_emukit = GPyModelWrapper(gpy_model)
# # Load core elements for Bayesian optimization
# expected_improvement = ExpectedImprovement(model=model_emukit)
# optimizer = GradientAcquisitionOptimizer(space=parameter_space)
# # Create the Bayesian optimization object
# batch_size = 3
# bayesopt_loop = BayesianOptimizationLoop(model=model_emukit,
# space=parameter_space,
# acquisition=expected_improvement,
# batch_size=batch_size)
# # Run the loop and extract the optimum; we either complete 10 steps or converge
# max_iters = 10
# stopping_condition = FixedIterationsStoppingCondition(
# i_max=max_iters) | ConvergenceStoppingCondition(eps=0.01)
# bayesopt_loop.run_loop(f, stopping_condition)
###Output
_____no_output_____ |
prediction/multitask/fine-tuning/program synthesis/large_model.ipynb | ###Markdown
**Generate the program based on the question using the codeTrans multitask fine-tuning model**You can make free predictions online through this Link (when predicting online, you need to parse and tokenize the code first). **1. Load necessary libraries including huggingface transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and load it onto the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask_finetune", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:970: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Give the question for generating the code, then parse and tokenize it**
###Code
question = "you are given an array of numbers a and a number b, compute the difference of elements in a and b" #@param {type:"raw"}
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
def englishTokenizer(sentence):
result = []
tokens = word_tokenize(sentence)
for t in tokens:
        if len(t) <= 50:
result.append(t)
return ' '.join(result)
tokenized_question = englishTokenizer(question)
print("tokenized question: " + tokenized_question)
###Output
tokenized question: you are given an array of numbers a and a number b , compute the difference of elements in a and b
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_question])
###Output
_____no_output_____ |
ml/feature_importance/permutation_importance.ipynb | ###Markdown
Permutation ImportancePermutation feature importance is a model inspection technique that can be used for any fitted estimator when the data is tabular. This is especially useful for non-linear or opaque estimators. The permutation feature importance is defined to be the decrease in a model score when a single feature value is randomly shuffled [1]. This procedure breaks the relationship between the feature and the target, thus the drop in the model score is indicative of how much the model depends on the feature. This technique benefits from being model agnostic and can be calculated many times with different permutations of the feature.The `permutation_importance` function calculates the feature importance of estimators for a given dataset. The n_repeats parameter sets the number of times a feature is randomly shuffled and returns a sample of feature importances.**Warning**Features that are deemed of low importance for a bad model (low cross-validation score) could be very important for a good model. Therefore it is always important to evaluate the predictive power of a model using a held-out set (or better with cross-validation) prior to computing importances. Permutation importance does not reflect the intrinsic predictive value of a feature by itself but how important this feature is for a particular model. Correlated FeaturesWhen two features are correlated and one of the features is permuted, the model will still have access to the feature through its correlated feature. This will result in a lower importance value for both features, even though they might actually be important.One way to handle this is to cluster features that are correlated and only keep one feature from each cluster. This strategy is explored in the following example: Permutation Importance with Multicollinear or Correlated Features.
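As a concept check, the same quantity can be computed by hand: record a baseline score on held-out data, shuffle one feature column, and measure the drop. This is only an illustrative sketch of the idea (the hypothetical helper below is not part of scikit-learn, which does this more carefully in `permutation_importance`):
```
import numpy as np

def manual_permutation_importance(model, X_val, y_val, feature_idx, n_repeats=10, seed=0):
    """Mean/std of the score drop after shuffling one validation feature."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X_val, y_val)
    drops = []
    for _ in range(n_repeats):
        X_perm = X_val.copy()
        rng.shuffle(X_perm[:, feature_idx])   # break the feature/target relationship
        drops.append(baseline - model.score(X_perm, y_val))
    return np.mean(drops), np.std(drops)
```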
###Code
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
diabetes = load_diabetes()
X_train, X_val, y_train, y_val = train_test_split(
diabetes.data, diabetes.target, random_state=0)
model = Ridge(alpha=1e-2).fit(X_train, y_train)
model.score(X_val, y_val)
df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
df['target'] = pd.Series(diabetes.target)
df.head()
from sklearn.inspection import permutation_importance
r = permutation_importance(model, X_val, y_val,
n_repeats=30,
random_state=0)
for i in r.importances_mean.argsort()[::-1]:
if r.importances_mean[i] - 2 * r.importances_std[i] > 0:
print(f"{diabetes.feature_names[i]:<8}"
f"{r.importances_mean[i]:.3f}"
f" +/- {r.importances_std[i]:.3f}")
for i in r.importances_mean.argsort()[::-1]:
print(f"{diabetes.feature_names[i]:<8}"
f"{r.importances_mean[i]:.3f}"
f" +/- {r.importances_std[i]:.3f}"
)
###Output
s5 0.204 +/- 0.050
bmi 0.176 +/- 0.048
bp 0.088 +/- 0.033
sex 0.056 +/- 0.023
s1 0.042 +/- 0.031
s4 0.003 +/- 0.008
s6 0.003 +/- 0.003
s3 0.002 +/- 0.013
s2 0.002 +/- 0.003
age -0.002 +/- 0.004
|
analysis_existing.ipynb | ###Markdown
Northbay Living Shoreline Near Shore Wave AnalysisPurpose: Set up a 2D XBeach model utilizing the Python toolbox for the existing conditions of the Northbay living shoreline project Import all of the required packages to run the analysisImport all of the required Python packages
###Code
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import os
import sys
# Silence deprecation warnings
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
###Output
_____no_output_____
###Markdown
Import all of the required xbeach tools to create the bathymetric model and set up the XBeach model run
###Code
sys.path.append(os.path.abspath(os.path.join('lib', 'xbeach-toolbox', 'scripts')))
from xbeachtools import xgrid, ygrid, seaward_extend, XBeachModelSetup, offshore_depth, lateral_extend
plt.style.use(os.path.join('lib', 'xbeach-toolbox', 'scripts', 'xb.mplstyle'))
###Output
_____no_output_____
###Markdown
Compile the topographic and bathymetric data
###Code
## load data
bathy = np.loadtxt('./lib/DEM-to-grid/output/bathy_existing.dep')
## set bathy grid
nx = bathy.shape[1]
ny = bathy.shape[0]
dx = 0.5
dy = 0.5
x = np.linspace(0,(nx-1)*dx,nx)
y = np.linspace(0,(ny-1)*dy,ny)
X, Y = np.meshgrid(x,y)
## plot
plt.figure()
plt.pcolor(x,y,bathy)
plt.colorbar()
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title('bathy')
fig = plt.figure()
ax = Axes3D(fig)
ax.axes.set_zlim3d(-25, 25)
surf = ax.plot_surface(X, Y, bathy, cmap=cm.coolwarm, linewidth=0, antialiased=False)
plt.xlabel('x [m]')
plt.ylabel('y [m]')
###Output
_____no_output_____
###Markdown
Above Figures: bathymetric data heatmap (above) and 3D representation of bathymetry (below) Create x-gridCreate spatially varying x-grid resolution.
###Code
xgr,zgr = xgrid(x, bathy[20,:],dxmin=2)
plt.figure()
plt.plot(x,bathy[20,:],'-o')
plt.plot(xgr,zgr,'.-')
plt.legend(['Bathy','xgr'])
plt.xlabel('x [m]')
plt.ylabel('z [m]')
###Output
_____no_output_____
###Markdown
Create y-grid Create spatially varying y-grid resolution.
###Code
ygr = ygrid(y)
plt.figure()
plt.plot(y[:-1],np.diff(y),'-o')
plt.plot(ygr[:-1],np.diff(ygr),'.-')
plt.legend(['y','ygr'])
plt.xlabel('y [m]')
plt.ylabel('dy [m]')
###Output
_____no_output_____
###Markdown
InterpolateInterpolate data to new grid to be used in running near shore wave modeling analysis
###Code
f = interpolate.interp2d(x, y, bathy, kind='linear')
zgr = f(xgr,ygr)
plt.figure()
plt.pcolor(xgr,ygr,zgr)
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title('xb bathy')
xgr, ygr = np.meshgrid(xgr,ygr)
###Output
_____no_output_____
###Markdown
Seaward extendExtend the grid to the required offshore depth in order to ensure adequate runup for wave modeling purposes
###Code
d_start, slope, Hm0_shoal = offshore_depth(Hm0=9, Tp=15, depth_offshore_profile=abs(bathy[0,0]), depth_boundary_conditions=10)
xgr, ygr, zgr = seaward_extend(xgr,ygr,zgr,slope=1/150,depth=-2)
plt.figure()
plt.pcolor(xgr,ygr,zgr)
plt.figure()
plt.plot(xgr[:,:].T,zgr[:,:].T)
plt.xlabel('x [m]')
plt.ylabel('z [m]')
###Output
Artificial slope of 1:10
Hm0,shoal = 5.721544477608835
d start = 19.110219776155322
Hm0,shoal/d profile = 4.413733247660724
Hm0,shoal/d slope = 0.2993971050373718
n profile = 0.501075514921838
n slope = 0.7522848806027095
###Markdown
Lateral extendExtend the model laterally to ensure that there is adequate lateral area to run the analysis
###Code
xgr,ygr,zgr = lateral_extend(xgr,ygr,zgr,n=5)
plt.figure()
plt.pcolor(xgr,ygr,zgr)
###Output
_____no_output_____
###Markdown
Create model setup execution
###Code
xb_setup = XBeachModelSetup('Northbay')
print(xb_setup)
###Output
_____no_output_____
###Markdown
Add the grid, wave boundary conditions and parameter to the model
###Code
xb_setup.set_grid(xgr,ygr,zgr)
xb_setup.set_waves('jonstable',{'Hm0':[1.5, 2, 1.5],'Tp':[4, 5, 4],'gammajsp':[3.3, 3.3, 3.3], 's' : [20,20,20], 'mainang':[270,280,290],'duration':[3600, 3600, 3600],'dtbc':[1,1,1]})
xb_setup.set_params({'Wavemodel':'surfbeat',
'morphology':0,
'befriccoef':0.01,
'tstop':3600,
'npointvar':['zb','zs','H'],
'nmeanvar':['zb'],
'npoints':['1 0', '6 0', '10 0', '12 0']})
###Output
_____no_output_____
###Markdown
Write the model setup to be executed
###Code
sim_path = os.path.join('output-2D')
if not os.path.exists(sim_path):
os.mkdir(sim_path)
xb_setup.write_model(os.path.join(sim_path))
###Output
_____no_output_____ |
numba/simple/cityblock-distance-matrix-numba.jit.ipynb | ###Markdown
Using `numba.jit` to speed up the computation of the Cityblock distance matrix In this notebook we implement a function to compute the Cityblock distance matrix using Numba's *just-in-time* compilation decorator. We compare its performance to that of the corresponding non-decorated NumPy function.We will use two Numba functions here: the decorator `@numba.jit` and `numba.prange`.
###Code
import numpy as np
import numba
def cityblock_python(x, y):
"""Cityblock distance matrix."""
num_samples, num_feat = x.shape
dist_matrix = np.empty((num_samples, num_samples))
for i in range(num_samples):
for j in range(num_samples):
r = 0.0
for k in range(num_feat):
r += np.abs(x[i][k] - y[j][k])
dist_matrix[i][j] = r
return dist_matrix
@numba.jit(nopython=True)
def cityblock_numba1(x, y):
"""Cityblock distance matrix."""
num_samples, num_feat = x.shape
dist_matrix = np.empty((num_samples, num_samples))
for i in range(num_samples):
for j in range(num_samples):
r = 0.0
for k in numba.prange(num_feat):
r += np.abs(x[i][k] - y[j][k])
dist_matrix[i][j] = r
return dist_matrix
@numba.jit(nopython=True)
def cityblock_numba2(x, y):
"""Cityblock distance matrix using `numpy.linalg.norm`
operation.
"""
num_samples, num_feat = x.shape
dist_matrix = np.empty((num_samples, num_samples))
for i in range(num_samples):
for j in numba.prange(num_samples):
dist_matrix[i][j] = np.linalg.norm(x[i] - y[j], 1)
return dist_matrix
###Output
_____no_output_____
###Markdown
NoteObserve that we do the inner loop, which is a reduction, with `numba.prange`. `numba.prange` automatically takes care of data privatization and reductions.
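Keep in mind that `numba.prange` only executes in parallel when the function is compiled with `parallel=True`; under `nopython=True` alone it behaves like a plain `range`. A small illustrative sketch of an explicitly parallel loop with a per-row reduction (not part of the benchmark above; assumes the `np`/`numba` imports from the first cell):
```
@numba.jit(nopython=True, parallel=True)
def row_abs_sums(x):
    """Sum of absolute values per row; the outer loop runs in parallel."""
    n, m = x.shape
    out = np.empty(n)
    for i in numba.prange(n):
        s = 0.0                      # private accumulator for each parallel iteration
        for j in range(m):
            s += np.abs(x[i, j])
        out[i] = s
    return out
```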
###Code
# Let's check that they all give the same result
a = 10. * np.random.random([100, 10])
print(np.abs(cityblock_python(a, a) - cityblock_numba1(a, a)).max())
print(np.abs(cityblock_python(a, a) - cityblock_numba2(a, a)).max())
nsamples = 200
nfeat = 25
x = 10. * np.random.random([nsamples, nfeat])
%timeit cityblock_python(x,x)
%timeit cityblock_numba1(x, x)
%timeit cityblock_numba2(x, x)
###Output
1.8 s ± 23.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
777 µs ± 2.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
6.39 ms ± 61.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Using `numba.jit` to speed up the computation of the Cityblock distance matrix In this notebook we implement a function to compute the Cityblock distance matrix using Numba's *just-in-time* compilation decorator. We compare its performance to that of the corresponding non-decorated NumPy function.We will use two Numba functions here: the decorator `@numba.jit` and `numba.prange`.
###Code
import numpy as np
import numba
def cityblock_python(x, y):
"""Naive python implementation."""
num_samples, num_feat = x.shape
dist_matrix = np.empty((num_samples, num_samples))
for i in range(num_samples):
for j in range(num_samples):
r = 0.0
for k in range(num_feat):
r += np.abs(x[i][k] - y[j][k])
dist_matrix[i][j] = r
return dist_matrix
@numba.jit(nopython=True)
def cityblock_numba1(x, y):
"""Implementation with numba."""
num_samples, num_feat = x.shape
dist_matrix = np.empty((num_samples, num_samples))
for i in range(num_samples):
for j in range(num_samples):
r = 0.0
for k in numba.prange(num_feat):
r += np.abs(x[i][k] - y[j][k])
dist_matrix[i][j] = r
return dist_matrix
@numba.jit(nopython=True)
def cityblock_numba2(x, y):
"""Implementation with numba and numpy."""
num_samples, num_feat = x.shape
dist_matrix = np.empty((num_samples, num_samples))
for i in range(num_samples):
for j in numba.prange(num_samples):
dist_matrix[i][j] = np.linalg.norm(x[i] - y[j], 1)
return dist_matrix
###Output
_____no_output_____
###Markdown
NoteObserve that the inner loop, which is a reduction, is done with `numba.prange`. `numba.prange` automatically takes care of data privatization and reductions.
###Code
# Let's check that they all give the same result
a = 10. * np.random.random([100, 10])
print(np.abs(cityblock_python(a, a) - cityblock_numba1(a, a)).max())
print(np.abs(cityblock_python(a, a) - cityblock_numba2(a, a)).max())
nsamples = 200
nfeat = 25
x = 10. * np.random.random([nsamples, nfeat])
%timeit cityblock_python(x,x)
%timeit cityblock_numba1(x, x)
%timeit cityblock_numba2(x, x)
###Output
_____no_output_____ |
1-intro-to-computer-vision/activities/5-cnn-layers-and-feature-visualization/4. Classify FashionMNIST, solution 1.ipynb | ###Markdown
CNN for Classification---In this and the next notebook, we define **and train** a CNN to classify images from the [Fashion-MNIST database](https://github.com/zalandoresearch/fashion-mnist). We are providing two solutions to show you how different network structures and training strategies can affect the performance and accuracy of a CNN. This first solution will be a simple CNN with two convolutional layers. Please note that this is just one possible solution out of many! Load the [data](https://pytorch.org/docs/stable/torchvision/datasets.html)In this cell, we load in both **training and test** datasets from the FashionMNIST class.
###Code
# our basic libraries
import torch
import torchvision
# data loading and transforming
from torchvision.datasets import FashionMNIST
from torch.utils.data import DataLoader
from torchvision import transforms
# The output of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors for input into a CNN
## Define a transform to read the data in as a tensor
data_transform = transforms.ToTensor()
# choose the training and test datasets
train_data = FashionMNIST(root='./data', train=True,
download=True, transform=data_transform)
test_data = FashionMNIST(root='./data', train=False,
download=True, transform=data_transform)
# Print out some stats about the training and test data
print('Train data, number of images: ', len(train_data))
print('Test data, number of images: ', len(test_data))
# prepare data loaders, set the batch_size
## TODO: you can try changing the batch_size to be larger or smaller
## when you get to training your network, see how batch_size affects the loss
batch_size = 20
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
# specify the image classes
classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
Visualize some training dataThis cell iterates over the training dataset, loading a random batch of image/label data, using `dataiter.next()`. It then plots the batch of images and labels in a `2 x batch_size/2` grid.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(batch_size):
    ax = fig.add_subplot(2, batch_size//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
Define the network architectureThe various layers that make up any neural network are documented, [here](https://pytorch.org/docs/stable/nn.html). For a convolutional neural network, we'll use a simple series of layers:* Convolutional layers* Maxpooling layers* Fully-connected (linear) layersYou are also encouraged to look at adding [dropout layers](https://pytorch.org/docs/stable/nn.htmldropout) to avoid overfitting this data.---To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the feedforward behavior of a network that employs those initialized layers in the function `forward`, which takes in an input image tensor, `x`. The structure of this Net class is shown below and left for you to fill in.Note: During training, PyTorch will be able to perform backpropagation by keeping track of the network's feedforward behavior and using autograd to calculate the update to the weights in the network. Define the Layers in ` __init__`As a reminder, a conv/pool layer may be defined like this (in `__init__`):``` 1 input image channel (for grayscale images), 32 output channels/feature maps, 3x3 square convolution kernelself.conv1 = nn.Conv2d(1, 32, 3) maxpool that uses a square window of kernel_size=2, stride=2self.pool = nn.MaxPool2d(2, 2) ``` Refer to Layers in `forward`Then referred to in the `forward` function like this, in which the conv1 layer has a ReLu activation applied to it before maxpooling is applied:```x = self.pool(F.relu(self.conv1(x)))```You must place any layers with trainable weights, such as convolutional layers, in the `__init__` function and refer to them in the `forward` function; any layers or functions that always behave in the same way, such as a pre-defined activation function, may appear *only* in the `forward` function. In practice, you'll often see conv/pool layers defined in `__init__` and activations defined in `forward`. Convolutional layerThe first convolution layer has been defined for you, it takes in a 1 channel (grayscale) image and outputs 10 feature maps as output, after convolving the image with 3x3 filters. FlatteningRecall that to move from the output of a convolutional/pooling layer to a linear layer, you must first flatten your extracted features into a vector. If you've used the deep learning library, Keras, you may have seen this done by `Flatten()`, and in PyTorch you can flatten an input `x` with `x = x.view(x.size(0), -1)`. TODO: Define the rest of the layersIt will be up to you to define the other layers in this network; we have some recommendations, but you may change the architecture and parameters as you see fit.Recommendations/tips:* Use at least two convolutional layers* Your output must be a linear layer with 10 outputs (for the 10 classes of clothing)* Use a dropout layer to avoid overfitting A note on output sizeFor any convolutional layer, the output feature maps will have the specified depth (a depth of 10 for 10 filters in a convolutional layer) and the dimensions of the produced feature maps (width/height) can be computed as the _input image_ width/height, W, minus the filter size, F, divided by the stride, S, all + 1. The equation looks like: `output_dim = (W-F)/S + 1`, for an assumed padding size of 0. You can find a derivation of this formula, [here](http://cs231n.github.io/convolutional-networks/conv).For a pool layer with a size 2 and stride 2, the output dimension will be reduced by a factor of 2. Read the comments in the code below to see the output size for each layer.
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel (grayscale), 10 output channels/feature maps
# 3x3 square convolution kernel
## output size = (W-F)/S +1 = (28-3)/1 +1 = 26
# the output Tensor for one image, will have the dimensions: (10, 26, 26)
# after one pool layer, this becomes (10, 13, 13)
self.conv1 = nn.Conv2d(1, 10, 3)
# maxpool layer
# pool with kernel_size=2, stride=2
self.pool = nn.MaxPool2d(2, 2)
# second conv layer: 10 inputs, 20 outputs, 3x3 conv
## output size = (W-F)/S +1 = (13-3)/1 +1 = 11
# the output tensor will have dimensions: (20, 11, 11)
# after another pool layer this becomes (20, 5, 5); 5.5 is rounded down
self.conv2 = nn.Conv2d(10, 20, 3)
# 20 outputs * the 5*5 filtered/pooled map size
# 10 output channels (for the 10 classes)
self.fc1 = nn.Linear(20*5*5, 10)
# define the feedforward behavior
def forward(self, x):
# two conv/relu + pool layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
# prep for linear layer
# flatten the inputs into a vector
x = x.view(x.size(0), -1)
# one linear layer
x = F.relu(self.fc1(x))
# a softmax layer to convert the 10 outputs into a distribution of class scores
x = F.log_softmax(x, dim=1)
# final output
return x
# instantiate and print your Net
net = Net()
print(net)
###Output
Net(
(conv1): Conv2d(1, 10, kernel_size=(3, 3), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(10, 20, kernel_size=(3, 3), stride=(1, 1))
(fc1): Linear(in_features=500, out_features=10, bias=True)
)
###Markdown
TODO: Specify the loss function and optimizerLearn more about [loss functions](https://pytorch.org/docs/stable/nn.htmlloss-functions) and [optimizers](https://pytorch.org/docs/stable/optim.html) in the online documentation.Note that for a classification problem like this, one typically uses cross entropy loss, which can be defined in code like: `criterion = nn.CrossEntropyLoss()`; cross entropy loss combines `softmax` and `NLL loss` so, alternatively (as in this example), you may see NLL Loss being used when the output of our Net is a distribution of class scores. PyTorch also includes some standard stochastic optimizers like stochastic gradient descent and Adam. You're encouraged to try different optimizers and see how your model responds to these choices as it trains.
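As a quick sanity check of that equivalence, both formulations give the same value on the same raw scores (a small illustrative sketch, separate from the training code; it reuses the `torch`, `nn` and `F` imports above):
```
logits = torch.randn(4, 10)                 # a fake batch of raw class scores
targets = torch.randint(0, 10, (4,))
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
print(ce.item(), nll.item())                # identical up to floating-point error
```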
###Code
import torch.optim as optim
## TODO: specify loss function
# cross entropy loss combines softmax and nn.NLLLoss() in one single class.
criterion = nn.NLLLoss()
## TODO: specify optimizer
# stochastic gradient descent with a small learning rate
optimizer = optim.SGD(net.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
A note on accuracyIt's interesting to look at the accuracy of your network **before and after** training. This way you can really see that your network has learned something. In the next cell, let's see what the accuracy of an untrained network is (we expect it to be around 10% which is the same accuracy as just guessing for all 10 classes).
###Code
# Calculate accuracy before training
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
# forward pass to get outputs
# the outputs are a series of class scores
outputs = net(images)
# get the predicted class from the maximum value in the output-list of class scores
_, predicted = torch.max(outputs.data, 1)
# count up total number of correct labels
# for which the predicted and true labels are equal
total += labels.size(0)
correct += (predicted == labels).sum()
# calculate the accuracy
# to convert `correct` from a Tensor into a scalar, use .item()
accuracy = 100.0 * correct.item() / total
# print it out!
print('Accuracy before training: ', accuracy)
###Output
Accuracy before training: 9.93
###Markdown
Train the NetworkBelow, we've defined a `train` function that takes in a number of epochs to train for. * The number of epochs is how many times a network will cycle through the entire training dataset. * Inside the epoch loop, we loop over the training dataset in batches, recording the loss every 1000 batches.Here are the steps that this training function performs as it iterates over the training dataset:1. Zeros the gradients to prepare for a forward pass2. Passes the input through the network (forward pass)3. Computes the loss (how far the predicted classes are from the correct labels)4. Propagates gradients back into the network’s parameters (backward pass)5. Updates the weights (parameter update)6. Prints out the calculated loss
###Code
def train(n_epochs):
loss_over_time = [] # to track the loss as the network trains
for epoch in range(n_epochs): # loop over the dataset multiple times
running_loss = 0.0
for batch_i, data in enumerate(train_loader):
# get the input images and their corresponding labels
inputs, labels = data
# zero the parameter (weight) gradients
optimizer.zero_grad()
# forward pass to get outputs
outputs = net(inputs)
# calculate the loss
loss = criterion(outputs, labels)
# backward pass to calculate the parameter gradients
loss.backward()
# update the parameters
optimizer.step()
# print loss statistics
# to convert loss into a scalar and add it to running_loss, we use .item()
running_loss += loss.item()
if batch_i % 1000 == 999: # print every 1000 batches
avg_loss = running_loss/1000
# record and print the avg loss over the 1000 batches
loss_over_time.append(avg_loss)
print('Epoch: {}, Batch: {}, Avg. Loss: {}'.format(epoch + 1, batch_i+1, avg_loss))
running_loss = 0.0
print('Finished Training')
return loss_over_time
# define the number of epochs to train for
n_epochs = 30 # start small to see if your model works, initially
# call train and record the loss over time
training_loss = train(n_epochs)
###Output
Epoch: 1, Batch: 1000, Avg. Loss: 2.280197047948837
Epoch: 1, Batch: 2000, Avg. Loss: 2.1881805083751678
Epoch: 1, Batch: 3000, Avg. Loss: 2.0305942301750184
Epoch: 2, Batch: 1000, Avg. Loss: 1.8912174297571183
Epoch: 2, Batch: 2000, Avg. Loss: 1.7781494530439377
Epoch: 2, Batch: 3000, Avg. Loss: 1.700443705201149
Epoch: 3, Batch: 1000, Avg. Loss: 1.6538931933641434
Epoch: 3, Batch: 2000, Avg. Loss: 1.6182195357084275
Epoch: 3, Batch: 3000, Avg. Loss: 1.5871113073825835
Epoch: 4, Batch: 1000, Avg. Loss: 1.571334638774395
Epoch: 4, Batch: 2000, Avg. Loss: 1.558837603390217
Epoch: 4, Batch: 3000, Avg. Loss: 1.5352823460698128
Epoch: 5, Batch: 1000, Avg. Loss: 1.5245767230987548
Epoch: 5, Batch: 2000, Avg. Loss: 1.5258621676564217
Epoch: 5, Batch: 3000, Avg. Loss: 1.500036157488823
Epoch: 6, Batch: 1000, Avg. Loss: 1.3859390335083008
Epoch: 6, Batch: 2000, Avg. Loss: 1.3059106462001802
Epoch: 6, Batch: 3000, Avg. Loss: 1.296736326932907
Epoch: 7, Batch: 1000, Avg. Loss: 1.2898800143897533
Epoch: 7, Batch: 2000, Avg. Loss: 1.2803966630399226
Epoch: 7, Batch: 3000, Avg. Loss: 1.2703334674835205
Epoch: 8, Batch: 1000, Avg. Loss: 1.2602456424832345
Epoch: 8, Batch: 2000, Avg. Loss: 1.2535528672337533
Epoch: 8, Batch: 3000, Avg. Loss: 1.259417534351349
Epoch: 9, Batch: 1000, Avg. Loss: 1.24847115072608
Epoch: 9, Batch: 2000, Avg. Loss: 1.232547440737486
Epoch: 9, Batch: 3000, Avg. Loss: 1.2352095106840133
Epoch: 10, Batch: 1000, Avg. Loss: 1.23017308062315
Epoch: 10, Batch: 2000, Avg. Loss: 1.222173468708992
Epoch: 10, Batch: 3000, Avg. Loss: 1.2068115211725234
Epoch: 11, Batch: 1000, Avg. Loss: 1.2126260179281234
Epoch: 11, Batch: 2000, Avg. Loss: 1.201838692188263
Epoch: 11, Batch: 3000, Avg. Loss: 1.199593785494566
Epoch: 12, Batch: 1000, Avg. Loss: 1.1992131185531616
Epoch: 12, Batch: 2000, Avg. Loss: 1.1842407554984093
Epoch: 12, Batch: 3000, Avg. Loss: 1.1891167818605899
Epoch: 13, Batch: 1000, Avg. Loss: 1.18865768802166
Epoch: 13, Batch: 2000, Avg. Loss: 1.1762889119386672
Epoch: 13, Batch: 3000, Avg. Loss: 1.173438048005104
Epoch: 14, Batch: 1000, Avg. Loss: 1.1709098608195783
Epoch: 14, Batch: 2000, Avg. Loss: 1.1712206808924674
Epoch: 14, Batch: 3000, Avg. Loss: 1.1643793394565582
Epoch: 15, Batch: 1000, Avg. Loss: 1.1628621336519718
Epoch: 15, Batch: 2000, Avg. Loss: 1.1555464325845242
Epoch: 15, Batch: 3000, Avg. Loss: 1.1616584394276142
Epoch: 16, Batch: 1000, Avg. Loss: 1.1508833450675011
Epoch: 16, Batch: 2000, Avg. Loss: 1.157491392761469
Epoch: 16, Batch: 3000, Avg. Loss: 1.1457767978608608
Epoch: 17, Batch: 1000, Avg. Loss: 1.1505050802826882
Epoch: 17, Batch: 2000, Avg. Loss: 1.1356983349621297
Epoch: 17, Batch: 3000, Avg. Loss: 1.1449345200359822
Epoch: 18, Batch: 1000, Avg. Loss: 1.136381413757801
Epoch: 18, Batch: 2000, Avg. Loss: 1.1390522135794163
Epoch: 18, Batch: 3000, Avg. Loss: 1.1342694403529168
Epoch: 19, Batch: 1000, Avg. Loss: 1.125833790153265
Epoch: 19, Batch: 2000, Avg. Loss: 1.1413091076910495
Epoch: 19, Batch: 3000, Avg. Loss: 1.1256828525066376
Epoch: 20, Batch: 1000, Avg. Loss: 1.131493739336729
Epoch: 20, Batch: 2000, Avg. Loss: 1.1252183372080327
Epoch: 20, Batch: 3000, Avg. Loss: 1.1201562556624411
Epoch: 21, Batch: 1000, Avg. Loss: 1.1071451597809792
Epoch: 21, Batch: 2000, Avg. Loss: 1.120999905705452
Epoch: 21, Batch: 3000, Avg. Loss: 1.1322092941999435
Epoch: 22, Batch: 1000, Avg. Loss: 1.1204465953111649
Epoch: 22, Batch: 2000, Avg. Loss: 1.1183336389064789
Epoch: 22, Batch: 3000, Avg. Loss: 1.1078847616314889
Epoch: 23, Batch: 1000, Avg. Loss: 1.1171914002299308
Epoch: 23, Batch: 2000, Avg. Loss: 1.1045918381214141
Epoch: 23, Batch: 3000, Avg. Loss: 1.112335798561573
Epoch: 24, Batch: 1000, Avg. Loss: 1.1184288013726473
Epoch: 24, Batch: 2000, Avg. Loss: 1.1074971999228
Epoch: 24, Batch: 3000, Avg. Loss: 1.0941183556616307
Epoch: 25, Batch: 1000, Avg. Loss: 1.0311833134889603
Epoch: 25, Batch: 2000, Avg. Loss: 0.926190541267395
Epoch: 25, Batch: 3000, Avg. Loss: 0.9271546306312084
Epoch: 26, Batch: 1000, Avg. Loss: 0.9197074555754662
Epoch: 26, Batch: 2000, Avg. Loss: 0.9222738272845745
Epoch: 26, Batch: 3000, Avg. Loss: 0.9095738008618355
Epoch: 27, Batch: 1000, Avg. Loss: 0.9205604720264673
Epoch: 27, Batch: 2000, Avg. Loss: 0.9153134944736958
Epoch: 27, Batch: 3000, Avg. Loss: 0.8910749333202839
Epoch: 28, Batch: 1000, Avg. Loss: 0.9015810181647539
Epoch: 28, Batch: 2000, Avg. Loss: 0.9067668014913798
Epoch: 28, Batch: 3000, Avg. Loss: 0.9002675889879466
Epoch: 29, Batch: 1000, Avg. Loss: 0.8955601705908776
Epoch: 29, Batch: 2000, Avg. Loss: 0.9001211318671704
Epoch: 29, Batch: 3000, Avg. Loss: 0.8961946932375431
Epoch: 30, Batch: 1000, Avg. Loss: 0.8917670692801476
Epoch: 30, Batch: 2000, Avg. Loss: 0.8967054404765368
Epoch: 30, Batch: 3000, Avg. Loss: 0.889199899584055
Finished Training
###Markdown
Visualizing the lossA good indication of how much your network is learning as it trains is the loss over time. In this example, we printed and recorded the average loss every 1000 batches and for each epoch. Let's plot it and see how the loss decreases (or doesn't) over time.In this case, you can see that the loss takes a little while to make its big initial drop, and then flattens out over time.
###Code
# visualize the loss as the network trained
plt.plot(training_loss)
plt.xlabel('1000\'s of batches')
plt.ylabel('loss')
plt.ylim(0, 2.5) # consistent scale
plt.show()
###Output
_____no_output_____
###Markdown
Test the Trained NetworkOnce you are satisfied with how the loss of your model has decreased, there is one last step: test!You must test your trained model on a previously unseen dataset to see if it generalizes well and can accurately classify this new dataset. For FashionMNIST, which contains many pre-processed training images, a good model should reach **greater than 85% accuracy** on this test dataset. If you are not reaching this value, try training for a larger number of epochs, tweaking your hyperparameters, or adding/subtracting layers from your CNN.
###Code
# initialize tensor and lists to monitor test loss and accuracy
test_loss = torch.zeros(1)
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
# set the module to evaluation mode
net.eval()
for batch_i, data in enumerate(test_loader):
# get the input images and their corresponding labels
inputs, labels = data
# forward pass to get outputs
outputs = net(inputs)
# calculate the loss
loss = criterion(outputs, labels)
# update average test loss
test_loss = test_loss + ((torch.ones(1) / (batch_i + 1)) * (loss.data - test_loss))
# get the predicted class from the maximum value in the output-list of class scores
_, predicted = torch.max(outputs.data, 1)
# compare predictions to true label
# this creates a `correct` Tensor that holds the number of correctly classified images in a batch
correct = np.squeeze(predicted.eq(labels.data.view_as(predicted)))
# calculate test accuracy for *each* object class
# we get the scalar value of correct items for a class, by calling `correct[i].item()`
for i in range(batch_size):
label = labels.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
print('Test Loss: {:.6f}\n'.format(test_loss.numpy()[0]))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.922794
Test Accuracy of T-shirt/top: 75% (754/1000)
Test Accuracy of Trouser: 0% ( 0/1000)
Test Accuracy of Pullover: 84% (842/1000)
Test Accuracy of Dress: 86% (861/1000)
Test Accuracy of Coat: 67% (675/1000)
Test Accuracy of Sandal: 94% (942/1000)
Test Accuracy of Shirt: 49% (491/1000)
Test Accuracy of Sneaker: 97% (974/1000)
Test Accuracy of Bag: 94% (945/1000)
Test Accuracy of Ankle boot: 72% (720/1000)
Test Accuracy (Overall): 72% (7204/10000)
###Markdown
Visualize sample test resultsFormat: predicted class (true class)
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# get predictions
preds = np.squeeze(net(images).data.max(1, keepdim=True)[1].numpy())
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(batch_size):
    ax = fig.add_subplot(2, batch_size//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx] else "red"))
###Output
_____no_output_____
###Markdown
Question: What are some weaknesses of your model? (And how might you improve these in future iterations.) **Answer**: This model performs well on everything but shirts and pullovers (0% accuracy); it looks like it incorrectly classifies most of those as a coat, which has a similar overall shape. Because it performs well on everything but these two classes, I suspect this model is overfitting certain classes at the cost of generalization. I suspect that this accuracy could be improved by adding some dropout layers to avoid overfitting.
###Code
# Saving the model
model_dir = 'saved_models/'
model_name = 'fashion_net_simple.pt'
# after training, save your model parameters in the dir 'saved_models'
# when you're ready, un-comment the line below
torch.save(net.state_dict(), model_dir+model_name)
###Output
_____no_output_____ |
model-list-scrape.ipynb | ###Markdown
Scraping the List of Auto Models
###Code
from bs4 import BeautifulSoup # Parsing the web page
import pandas as pd
with open("data/ScoutBody.html", 'r', encoding='utf-8') as HtmlFile:
    source_code = HtmlFile.read()
soup = BeautifulSoup(source_code,'html.parser')
type(soup)
soup.title
div = soup.find(class_ = 'cl-classified-list-container')
script = div.find('script')
df = pd.DataFrame(script.text.split("{"))
df = df[df[0].str.find('"isModel" : true') != -1]
models = df[0].str.extract(r'("name" : )(\".*?\")')[1]
models
###Output
_____no_output_____ |
unsupervised-deeplearning/notebooks/CollabortiveFilteringUsingRBM.ipynb | ###Markdown
RECOMMENDATION SYSTEM WITH A RESTRICTED BOLTZMANN MACHINE Welcome to the Recommendation System with a Restricted Boltzmann Machine notebook. In this notebook, we study and go over the usage of a Restricted Boltzmann Machine (RBM) in a Collaborative Filtering based recommendation system. This system is an algorithm that recommends items by trying to find users that are similar to each other based on their item ratings. By the end of this notebook, you should have a deeper understanding of how Restricted Boltzmann Machines are applied, and how to build one using TensorFlow. Table of Contents Acquiring the Data Loading in the Data The Restricted Boltzmann Machine model Setting the Model's Parameters Recommendation Acquiring the Data To start, we need to download the data we are going to use for our system. The datasets we are going to use were acquired by GroupLens and contain movies, users and movie ratings by these users.After downloading the data, we will extract the datasets to a directory that is easily accessible.
###Code
!wget -c https://raw.githubusercontent.com/fawazsiddiqi/recommendation-system-with-a-Restricted-Boltzmann-Machine-using-tensorflow/master/data/ml-1m.zip -O moviedataset.zip
!unzip -o moviedataset.zip
###Output
_____no_output_____
###Markdown
With the datasets in place, let's now import the necessary libraries. We will be using Tensorflow and Numpy together to model and initialize our Restricted Boltzmann Machine and Pandas to manipulate our datasets. To import these libraries, run the code cell below.
###Code
#Tensorflow library. Used to implement machine learning models
import tensorflow as tf
#Numpy contains helpful functions for efficient mathematical calculations
import numpy as np
#Dataframe manipulation library
import pandas as pd
#Graph plotting library
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading in the DataLet's begin by loading in our data with Pandas. The .dat files containing our data are similar to CSV files, but instead of using the ',' (comma) character to separate entries, it uses '::' (two colons) characters instead. To let Pandas know that it should separate data points at every '::', we have to specify the sep='::' parameter when calling the function.Additionally, we also pass it the header=None parameter due to the fact that our files don't contain any headers.Let's start with the movies.dat file and take a look at its structure:
###Code
#Loading in the movies dataset
movies_df = pd.read_csv('ml-1m/movies.dat', sep='::', header=None, engine='python')
movies_df.head()
###Output
_____no_output_____
###Markdown
We can do the same for the ratings.dat file:
###Code
#Loading in the ratings dataset
ratings_df = pd.read_csv('ml-1m/ratings.dat', sep='::', header=None, engine='python')
ratings_df.head()
###Output
_____no_output_____
###Markdown
So our movies_df variable contains a dataframe that stores a movie's unique ID number, title and genres, while our ratings_df variable stores a unique User ID number, a movie's ID that the user has watched, the user's rating for said movie and when the user rated that movie.Let's now rename the columns in these dataframes so they convey their data more intuitively:
###Code
movies_df.columns = ['MovieID', 'Title', 'Genres']
movies_df.head()
###Output
_____no_output_____
###Markdown
And our final ratings_df:
###Code
ratings_df.columns = ['UserID', 'MovieID', 'Rating', 'Timestamp']
ratings_df.head()
###Output
_____no_output_____
###Markdown
The Restricted Boltzmann Machine model The Restricted Boltzmann Machine model has two layers of neurons, one of which is what we call a visible input layer and the other is called a hidden layer. The hidden layer is used to learn features from the information fed through the input layer. For our model, the input is going to contain X neurons, where X is the number of movies in our dataset. Each of these neurons will possess a normalized rating value varying from 0 to 1, where 0 means that a user has not watched that movie, and the closer the value is to 1, the more the user likes the movie that neuron represents. These normalized values, of course, will be extracted and normalized from the ratings dataset.After passing in the input, we train the RBM on it and have the hidden layer learn its features. These features are what we use to reconstruct the input, which in our case, will predict the ratings for movies the user hasn't watched, which is exactly what we can use to recommend movies!We will now begin to format our dataset to follow the model's expected input. Formatting the Data First let's see how many movies we have and see if the movie IDs correspond with that value:
###Code
len(movies_df)
###Output
_____no_output_____
###Markdown
Now, we can start formatting the data into input for the RBM. We're going to store the user ratings as a user-rating matrix called trX and normalize the values.
###Code
user_rating_df = ratings_df.pivot(index='UserID', columns='MovieID', values='Rating')
user_rating_df.head()
###Output
_____no_output_____
###Markdown
Let's normalize it now:
###Code
norm_user_rating_df = user_rating_df.fillna(0) / 5.0
trX = norm_user_rating_df.values
trX[0:5]
###Output
_____no_output_____
###Markdown
Setting the Model's Parameters Next, let's start building our RBM with TensorFlow. We'll begin by first determining the number of neurons in the hidden layers and then creating placeholder variables for storing our visible layer biases, hidden layer biases and weights that connects the hidden layer with the visible layer. We will be arbitrarily setting the number of neurons in the hidden layers to 20. You can freely set this value to any number you want since each neuron in the hidden layer will end up learning a feature.
###Code
hiddenUnits = 20
visibleUnits = len(user_rating_df.columns)
vb = tf.Variable(tf.zeros([visibleUnits]), tf.float32) #Number of unique movies
hb = tf.Variable(tf.zeros([hiddenUnits]), tf.float32) #Number of features we're going to learn
W = tf.Variable(tf.zeros([visibleUnits, hiddenUnits]), tf.float32)
###Output
_____no_output_____
###Markdown
We then move on to creating the visible and hidden layer units and setting their activation functions. In this case, we will be using the tf.sigmoid and tf.relu functions as nonlinear activations since they are commonly used in RBMs.
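For reference, these are the standard conditional probabilities of a binary RBM that the sigmoid computes below, with `vb` playing the role of the visible bias $a$, `hb` the hidden bias $b$ and `W` the weight matrix:

$$p(h_j = 1 \mid v) = \sigma\Big(b_j + \sum_i v_i W_{ij}\Big), \qquad p(v_i = 1 \mid h) = \sigma\Big(a_i + \sum_j W_{ij} h_j\Big), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}.$$

The `tf.nn.relu(tf.sign(...))` step then simply samples a binary state by comparing these probabilities against uniform random noise.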
###Code
v0 = tf.zeros([visibleUnits], tf.float32)
#testing to see if the matrix product works
tf.matmul([v0], W)
#Phase 1: Input Processing
#defining a function to return only the generated hidden states
def hidden_layer(v0_state, W, hb):
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb) #probabilities of the hidden units
h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random.uniform(tf.shape(h0_prob)))) #sample_h_given_X
return h0_state
#printing output of zeros input
h0 = hidden_layer(v0, W, hb)
print("first 15 hidden states: ", h0[0][0:15])
def reconstructed_output(h0_state, W, vb):
v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb)
v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random.uniform(tf.shape(v1_prob)))) #sample_v_given_h
return v1_state[0]
v1 = reconstructed_output(h0, W, vb)
print("hidden state shape: ", h0.shape)
print("v0 state shape: ", v0.shape)
print("v1 state shape: ", v1.shape)
###Output
_____no_output_____
###Markdown
And set the error function, which in this case will be the Mean Absolute Error Function.
###Code
def error(v0_state, v1_state):
return tf.reduce_mean(tf.square(v0_state - v1_state))
err = tf.reduce_mean(tf.square(v0 - v1))
print("error" , err.numpy())
###Output
_____no_output_____
###Markdown
Now we train the RBM with 5 epochs with each epoch using a batchsize of 500, giving 12 batches. After training, we print out a graph with the error by epoch.
###Code
epochs = 5
batchsize = 500
errors = []
weights = []
K=1
alpha = 0.1
#creating datasets
train_ds = \
tf.data.Dataset.from_tensor_slices((np.float32(trX))).batch(batchsize)
#for i in range(epochs):
# for start, end in zip( range(0, len(trX), batchsize), range(batchsize, len(trX), batchsize)):
# batch = trX[start:end]
# cur_w = sess.run(update_w, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# cur_vb = sess.run(update_vb, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# cur_nb = sess.run(update_hb, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# prv_w = cur_w
# prv_vb = cur_vb
# prv_hb = cur_hb
# errors.append(sess.run(err_sum, feed_dict={v0: trX, W: cur_w, vb: cur_vb, hb: cur_hb}))
# print (errors[-1])
v0_state=v0
for epoch in range(epochs):
batch_number = 0
for batch_x in train_ds:
for i_sample in range(len(batch_x)):
for k in range(K):
v0_state = batch_x[i_sample]
h0_state = hidden_layer(v0_state, W, hb)
v1_state = reconstructed_output(h0_state, W, vb)
h1_state = hidden_layer(v1_state, W, hb)
delta_W = tf.matmul(tf.transpose([v0_state]), h0_state) - tf.matmul(tf.transpose([v1_state]), h1_state)
W = W + alpha * delta_W
vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0)
hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0)
v0_state = v1_state
if i_sample == len(batch_x)-1:
err = error(batch_x[i_sample], v1_state)
errors.append(err)
weights.append(W)
print ( 'Epoch: %d' % (epoch + 1),
"batch #: %i " % batch_number, "of %i" % (len(trX)/batchsize),
"sample #: %i" % i_sample,
'reconstruction error: %f' % err)
batch_number += 1
plt.plot(errors)
plt.ylabel('Error')
plt.xlabel('Epoch')
plt.show()
###Output
_____no_output_____
###Markdown
Recommendation We can now predict movies that an arbitrarily selected user might like. This can be accomplished by feeding the user's watched movie preferences into the RBM and then reconstructing the input. The values that the RBM gives us will attempt to estimate the user's preferences for movies they haven't watched, based on the preferences of the users that the RBM was trained on. Let's first select a User ID for our mock user:
###Code
mock_user_id = 215
#Selecting the input user
inputUser = trX[mock_user_id-1].reshape(1, -1)
inputUser = tf.convert_to_tensor(trX[mock_user_id-1],"float32")
v0 = inputUser
print(v0)
v0.shape
v0test = tf.zeros([visibleUnits], tf.float32)
v0test.shape
#Feeding in the user and reconstructing the input
hh0 = tf.nn.sigmoid(tf.matmul([v0], W) + hb)
vv1 = tf.nn.sigmoid(tf.matmul(hh0, tf.transpose(W)) + vb)
rec = vv1
tf.maximum(rec,1)
for i in vv1:
print(i)
###Output
_____no_output_____
###Markdown
We can then list the 20 most recommended movies for our mock user by sorting it by their scores given by our model.
###Code
scored_movies_df_mock = movies_df[movies_df['MovieID'].isin(user_rating_df.columns)]
scored_movies_df_mock = scored_movies_df_mock.assign(RecommendationScore = rec[0])
scored_movies_df_mock.sort_values(["RecommendationScore"], ascending=False).head(20)
###Output
_____no_output_____
###Markdown
RECOMMENDATION SYSTEM WITH A RESTRICTED BOLTZMANN MACHINE Welcome to the Recommendation System with a Restricted Boltzmann Machine notebook. In this notebook, we study and go over the usage of a Restricted Boltzmann Machine (RBM) in a Collaborative Filtering based recommendation system. This system is an algorithm that recommends items by trying to find users that are similar to each other based on their item ratings. By the end of this notebook, you should have a deeper understanding of how Restricted Boltzmann Machines are applied, and how to build one using TensorFlow. Table of Contents Acquiring the Data Loading in the Data The Restricted Boltzmann Machine model Setting the Model's Parameters Recommendation Acquiring the Data To start, we need to download the data we are going to use for our system. The datasets we are going to use were acquired by GroupLens and contain movies, users and movie ratings by these users.After downloading the data, we will extract the datasets to a directory that is easily accessible.
###Code
!wget -c https://raw.githubusercontent.com/IBM/dl-learning-path-assets/main/unsupervised-deeplearning/data/ml-1m.zip -O moviedataset.zip
!unzip -o moviedataset.zip
###Output
_____no_output_____
###Markdown
With the datasets in place, let's now import the necessary libraries. We will be using Tensorflow and Numpy together to model and initialize our Restricted Boltzmann Machine and Pandas to manipulate our datasets. To import these libraries, run the code cell below.
###Code
#Tensorflow library. Used to implement machine learning models
import tensorflow as tf
#Numpy contains helpful functions for efficient mathematical calculations
import numpy as np
#Dataframe manipulation library
import pandas as pd
#Graph plotting library
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading in the DataLet's begin by loading in our data with Pandas. The .dat files containing our data are similar to CSV files, but instead of using the ',' (comma) character to separate entries, it uses '::' (two colons) characters instead. To let Pandas know that it should separate data points at every '::', we have to specify the sep='::' parameter when calling the function.Additionally, we also pass it the header=None parameter due to the fact that our files don't contain any headers.Let's start with the movies.dat file and take a look at its structure:
###Code
#Loading in the movies dataset
movies_df = pd.read_csv('ml-1m/movies.dat', sep='::', header=None, engine='python')
movies_df.head()
###Output
_____no_output_____
###Markdown
We can do the same for the ratings.dat file:
###Code
#Loading in the ratings dataset
ratings_df = pd.read_csv('ml-1m/ratings.dat', sep='::', header=None, engine='python')
ratings_df.head()
###Output
_____no_output_____
###Markdown
So our movies_df variable contains a dataframe that stores a movie's unique ID number, title and genres, while our ratings_df variable stores a unique User ID number, a movie's ID that the user has watched, the user's rating for said movie and when the user rated that movie.Let's now rename the columns in these dataframes so they convey their data more intuitively:
###Code
movies_df.columns = ['MovieID', 'Title', 'Genres']
movies_df.head()
###Output
_____no_output_____
###Markdown
And our final ratings_df:
###Code
ratings_df.columns = ['UserID', 'MovieID', 'Rating', 'Timestamp']
ratings_df.head()
###Output
_____no_output_____
###Markdown
The Restricted Boltzmann Machine model The Restricted Boltzmann Machine model has two layers of neurons, one of which is what we call a visible input layer and the other is called a hidden layer. The hidden layer is used to learn features from the information fed through the input layer. For our model, the input is going to contain X neurons, where X is the number of movies in our dataset. Each of these neurons will hold a normalized rating value varying from 0 to 1, where 0 means that the user has not watched that movie, and the closer the value is to 1, the more the user likes the movie that neuron represents. These normalized values, of course, will be extracted and normalized from the ratings dataset.After passing in the input, we train the RBM on it and have the hidden layer learn its features. These features are what we use to reconstruct the input, which in our case will predict the ratings for movies that the user hasn't watched, which is exactly what we can use to recommend movies!We will now begin to format our dataset to follow the model's expected input. Formatting the Data First let's see how many movies we have and see if the movie IDs correspond with that value:
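For reference, the sampling steps implemented in the cells that follow use the standard RBM conditional distributions (a sketch; here $W$, $b^{(h)}$ and $b^{(v)}$ correspond to the `W`, `hb` and `vb` variables defined below, and $\sigma$ is the sigmoid function):
$$P(h_j = 1 \mid v) = \sigma\Big(\sum_i v_i W_{ij} + b^{(h)}_j\Big), \qquad P(v_i = 1 \mid h) = \sigma\Big(\sum_j W_{ij} h_j + b^{(v)}_i\Big)$$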
###Code
len(movies_df)
###Output
_____no_output_____
###Markdown
Now, we can start formatting the data into input for the RBM. We're going to store the user ratings as a user-by-movie matrix called trX and then normalize the values.
###Code
user_rating_df = ratings_df.pivot(index='UserID', columns='MovieID', values='Rating')
user_rating_df.head()
###Output
_____no_output_____
###Markdown
Let's normalize it now:
###Code
norm_user_rating_df = user_rating_df.fillna(0) / 5.0
trX = norm_user_rating_df.values
trX[0:5]
###Output
_____no_output_____
###Markdown
Setting the Model's Parameters Next, let's start building our RBM with TensorFlow. We'll begin by first determining the number of neurons in the hidden layer and then creating variables for storing our visible layer biases, hidden layer biases and the weights that connect the hidden layer with the visible layer. We will be arbitrarily setting the number of neurons in the hidden layer to 20. You can freely set this value to any number you want since each neuron in the hidden layer will end up learning a feature.
###Code
hiddenUnits = 20
visibleUnits = len(user_rating_df.columns)
vb = tf.Variable(tf.zeros([visibleUnits]), tf.float32) #Number of unique movies
hb = tf.Variable(tf.zeros([hiddenUnits]), tf.float32) #Number of features we're going to learn
W = tf.Variable(tf.zeros([visibleUnits, hiddenUnits]), tf.float32)
###Output
_____no_output_____
###Markdown
We then move on to creating the visible and hidden layer units and setting their activation functions. In this case, we will be using the tf.sigmoid and tf.relu functions as nonlinear activations, since they are commonly used in RBMs.
###Code
v0 = tf.zeros([visibleUnits], tf.float32)
#testing to see if the matrix product works
tf.matmul([v0], W)
#Phase 1: Input Processing
#defining a function to return only the generated hidden states
def hidden_layer(v0_state, W, hb):
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb) #probabilities of the hidden units
h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random.uniform(tf.shape(h0_prob)))) #sample_h_given_X
return h0_state
#printing output of zeros input
h0 = hidden_layer(v0, W, hb)
print("first 15 hidden states: ", h0[0][0:15])
def reconstructed_output(h0_state, W, vb):
v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb)
v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random.uniform(tf.shape(v1_prob)))) #sample_v_given_h
return v1_state[0]
v1 = reconstructed_output(h0, W, vb)
print("hidden state shape: ", h0.shape)
print("v0 state shape: ", v0.shape)
print("v1 state shape: ", v1.shape)
###Output
_____no_output_____
###Markdown
And set the error function, which in this case will be the Mean Squared Error.
###Code
def error(v0_state, v1_state):
return tf.reduce_mean(tf.square(v0_state - v1_state))
err = tf.reduce_mean(tf.square(v0 - v1))
print("error" , err.numpy())
###Output
_____no_output_____
###Markdown
Now we train the RBM for 5 epochs, each epoch using a batch size of 500, giving 12 batches. After training, we plot the reconstruction error recorded at the end of each batch.
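For reference, the weight and bias updates applied inside the loop below are the standard single-step contrastive divergence (CD-1) updates, sketched here with $\alpha$ the learning rate and $v^{(0)}, h^{(0)}, v^{(1)}, h^{(1)}$ the sampled visible/hidden states before and after one Gibbs step:
$$\Delta W = \alpha\big({v^{(0)}}^{\top} h^{(0)} - {v^{(1)}}^{\top} h^{(1)}\big), \qquad \Delta b^{(v)} = \alpha\,\overline{\big(v^{(0)} - v^{(1)}\big)}, \qquad \Delta b^{(h)} = \alpha\,\overline{\big(h^{(0)} - h^{(1)}\big)}$$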
###Code
epochs = 5
batchsize = 500
errors = []
weights = []
K=1
alpha = 0.1
#creating datasets
train_ds = \
tf.data.Dataset.from_tensor_slices((np.float32(trX))).batch(batchsize)
#for i in range(epochs):
# for start, end in zip( range(0, len(trX), batchsize), range(batchsize, len(trX), batchsize)):
# batch = trX[start:end]
# cur_w = sess.run(update_w, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# cur_vb = sess.run(update_vb, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# cur_nb = sess.run(update_hb, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# prv_w = cur_w
# prv_vb = cur_vb
# prv_hb = cur_hb
# errors.append(sess.run(err_sum, feed_dict={v0: trX, W: cur_w, vb: cur_vb, hb: cur_hb}))
# print (errors[-1])
v0_state=v0
for epoch in range(epochs):
batch_number = 0
for batch_x in train_ds:
for i_sample in range(len(batch_x)):
for k in range(K):
v0_state = batch_x[i_sample]
h0_state = hidden_layer(v0_state, W, hb)
v1_state = reconstructed_output(h0_state, W, vb)
h1_state = hidden_layer(v1_state, W, hb)
delta_W = tf.matmul(tf.transpose([v0_state]), h0_state) - tf.matmul(tf.transpose([v1_state]), h1_state)
W = W + alpha * delta_W
vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0)
hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0)
v0_state = v1_state
if i_sample == len(batch_x)-1:
err = error(batch_x[i_sample], v1_state)
errors.append(err)
weights.append(W)
print ( 'Epoch: %d' % (epoch + 1),
"batch #: %i " % batch_number, "of %i" % (len(trX)/batchsize),
"sample #: %i" % i_sample,
'reconstruction error: %f' % err)
batch_number += 1
plt.plot(errors)
plt.ylabel('Error')
plt.xlabel('Batch')  # the error list holds one value per batch, not per epoch
plt.show()
###Output
_____no_output_____
###Markdown
Recommendation We can now predict movies that an arbitrarily selected user might like. This can be accomplished by feeding the user's movie preferences into the RBM and then reconstructing the input. The values that the RBM gives us will attempt to estimate the user's preferences for movies that they haven't watched, based on the preferences of the users that the RBM was trained on. Let's first select the User ID of our mock user:
###Code
mock_user_id = 215
#Selecting the input user
inputUser = trX[mock_user_id-1].reshape(1, -1)
inputUser = tf.convert_to_tensor(trX[mock_user_id-1],"float32")
v0 = inputUser
print(v0)
v0.shape
v0test = tf.zeros([visibleUnits], tf.float32)
v0test.shape
#Feeding in the user and reconstructing the input
hh0 = tf.nn.sigmoid(tf.matmul([v0], W) + hb)
vv1 = tf.nn.sigmoid(tf.matmul(hh0, tf.transpose(W)) + vb)
rec = vv1
tf.maximum(rec,1)
for i in vv1:
print(i)
###Output
_____no_output_____
###Markdown
We can then list the 20 most recommended movies for our mock user by sorting them by the scores given by our model.
###Code
scored_movies_df_mock = movies_df[movies_df['MovieID'].isin(user_rating_df.columns)]
scored_movies_df_mock = scored_movies_df_mock.assign(RecommendationScore = rec[0])
scored_movies_df_mock.sort_values(["RecommendationScore"], ascending=False).head(20)
###Output
_____no_output_____
###Markdown
So, how do we recommend movies that the user has not watched yet? First, we can find all the movies that our mock user has watched before:
###Code
movies_df_mock = ratings_df[ratings_df['UserID'] == mock_user_id]
movies_df_mock.head()
###Output
_____no_output_____
###Markdown
In the next cell, we merge all the movies that our mock user has watched with the predicted scores based on their historical data:
###Code
#Merging movies_df with ratings_df by MovieID
merged_df_mock = scored_movies_df_mock.merge(movies_df_mock, on='MovieID', how='outer')
###Output
_____no_output_____
###Markdown
Let's sort it and take a look at the first 20 rows:
###Code
merged_df_mock.sort_values(["RecommendationScore"], ascending=False).head(20)
###Output
_____no_output_____ |
build_training_sets.ipynb | ###Markdown
Build 2D training set using brainweb phantomsCreated on July 2020Abi MehranianEmail: [email protected] 1- build a 2D PET object for mMR scanner
###Code
import numpy as np
from matplotlib import pyplot as plt
from geometry.BuildGeometry_v4 import BuildGeometry_v4
from models.deeplib import buildBrainPhantomDataset
# build PET recontruction object
temPath = r'C:\pythonWorkSpace\tmp003'
PET = BuildGeometry_v4('mmr',0.5) #scanner mmr, with radial crop factor of 50%
PET.loadSystemMatrix(temPath,is3d=False)
# get some info of Pet object
print('is3d:',PET.is3d)
print('\nscanner info:', PET.scanner.as_dict())
print('\nimage info:',PET.image.as_dict())
print('\nsinogram info:',PET.sinogram.as_dict())
###Output
is3d: False
scanner info: {'model_number': 2008, 'circularGantry': 1, 'nBuckets': 224, 'nBlockRings': 8, 'nBlockPerRing': 56, 'nPhysCrystalsPerBlock': 8, 'useVirtualCrystal': 1, 'detectorRadiusCm': 32.8, 'sinogramDOIcm': 0.67, 'LORDOIcm': 0.96, 'rCrystalDimCm': 2.0, 'xCrystalDimCm': 0.41725, 'zCrystalDimCm': 0.40625, 'transaxialFovCm': 60.0, 'maxRingDiff': 60, 'coinciWindowWidthNsec': 5.85938, 'tofResolutionNsec': 5.85938, 'tofOffsetNsec': 0, 'nCrystalsPerBlock': 9, 'nCrystalsPerRing': 504, 'nCrystalRings': 64, 'effDetectorRadiusCm': 33.76, 'isTof': False, 'TofBinWidthNsec': 5.85938, 'planeSepCm': 0.203125}
image info: {'matrixSize': [172, 172, 127], 'voxelSizeCm': [0.208625, 0.208625, 0.203125], 'reconFovRadious': 24.0}
sinogram info: {'radialBinCropfactor': 0.5, 'nRadialBins_orig': 344, 'nRadialBins': 172, 'nMash': 1, 'span': 11, 'nSegments': 11, 'nTofBins': 1, 'nAngularBins': 252, 'numberOfPlanesPerSeg': array([ 27, 49, 71, 93, 115, 127, 115, 93, 71, 49, 27]), 'totalNumberOfSinogramPlanes': 837}
###Markdown
2- download brainweb phantoms (automatically) and prepare training sets
###Code
# this will take hours (5 phantoms, 5 random rotations each, lesion & sinogram simulation, 3 different recon,...)
# see 'buildBrainPhantomDataset' for default values, e.g. count level, psf, no. lesions, lesion size, no. rotations, rotation range,....
# LD/ld stands for low-definition low-dose, HD/hd stands for high-definition high-dose
phanPath = r'C:\phantoms\brainWeb'
save_training_dir = r'C:\MoDL\trainingDatasets\brainweb\2D'
phanType ='brainweb'
phanNumber = np.arange(0,5,1) # use first 5 brainweb phantoms out of 20
buildBrainPhantomDataset(PET, save_training_dir, phanPath, phanType =phanType, phanNumber = phanNumber,is3d = False, num_rand_rotations=5)
# check out the structure of the produced datasets, e.g. data-0.npy
d = np.load(save_training_dir+ '\\' + 'data-0.npy',allow_pickle=True).item()
d.keys()
fig, ax = plt.subplots(1,4,figsize=(20,10))
ax[0].imshow(d['mrImg'],cmap='gist_gray'),ax[0].set_title('mrImg',fontsize=20)
ax[1].imshow(d['imgHD'],cmap='gist_gray_r'),ax[1].set_title('imgHD',fontsize=20)
ax[2].imshow(d['imgLD'],cmap='gist_gray_r'),ax[2].set_title('imgLD',fontsize=20)
ax[3].imshow(d['imgLD_psf'],cmap='gist_gray_r'),ax[3].set_title('imgLD_psf',fontsize=20)
fig, ax = plt.subplots(1,2,figsize=(20,10))
ax[0].imshow(d['sinoLD']),ax[0].set_title('sinoLD',fontsize=20)
ax[1].imshow(d['AN']),ax[1].set_title('Atten. factors * Norm. Factors (AN)',fontsize=20)
###Output
_____no_output_____ |
samples/balloon/.ipynb_checkpoints/inspect_balloon_model-checkpoint.ipynb | ###Markdown
Mask R-CNN - Inspect ScrANTon Trained Model. Code and visualizations to test, debug, and evaluate the Mask R-CNN model.
###Code
import os
import sys
import random
import math
import re
import time
import numpy as np
import tensorflow as tf
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# Root directory of the project
ROOT_DIR = os.path.abspath(r"/home/simulation/Documents/Github/ScrANTonTracker/ScrANTonTrackerLAB/")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
from ants import ants
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Path to Balloon trained weights
# You can download this file from the Releases page
# https://github.com/matterport/Mask_RCNN/releases
BALLON_WEIGHTS_PATH = r"/home/simulation/Documents/TITANlogs/TRAINEDFULLANTS824.h5" # TODO: update this path
MODEL_DIR = BALLON_WEIGHTS_PATH
###Output
_____no_output_____
###Markdown
Configurations
###Code
config = ants.AntConfig()
BALLOON_DIR = os.path.join(ROOT_DIR, "data")
# Override the training configurations with a few
# changes for inferencing.
class InferenceConfig(config.__class__):
# Run detection on one image at a time
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
###Output
_____no_output_____
###Markdown
Notebook Preferences
###Code
# Device to load the neural network on.
# Useful if you're training a model on the same
# machine, in which case use CPU and leave the
# GPU for training.
DEVICE = "/cpu:0" # /cpu:0 or /gpu:0
# Inspect the model in training or inference modes
# values: 'inference' or 'training'
# TODO: code for 'training' test mode not ready yet
TEST_MODE = "inference"
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
Load Validation Dataset
###Code
# Load validation dataset
dataset = ants.AntDataset()
dataset.load_ant(BALLOON_DIR, "val")
# Must call before using the dataset
dataset.prepare()
print("Images: {}\nClasses: {}".format(len(dataset.image_ids), dataset.class_names))
###Output
_____no_output_____
###Markdown
Load Model
###Code
# Create model in inference mode
with tf.device(DEVICE):
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR,
config=config)
# Set path to balloon weights file
# Download file from the Releases page and set its path
# https://github.com/matterport/Mask_RCNN/releases
weights_path = r"/home/simulation/Documents/TITANlogs/TRAINEDFULLANTS824.h5"
# Or, load the last model you trained
# weights_path = model.find_last()
# Load weights
print("Loading weights ", weights_path)
model.load_weights(weights_path, by_name=True)
###Output
_____no_output_____
###Markdown
Run Detection
###Code
image_id = random.choice(dataset.image_ids)
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset, config, image_id, use_mini_mask=False)
info = dataset.image_info[image_id]
print("image ID: {}.{} ({}) {}".format(info["source"], info["id"], image_id,
dataset.image_reference(image_id)))
# Run object detection
results = model.detect([image], verbose=1)
# Display results
ax = get_ax(1)
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
dataset.class_names, r['scores'], ax=ax,
title="Predictions")
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
###Output
_____no_output_____
###Markdown
Color Splash. This is for illustration. You can call `balloon.py` with the `splash` option to get better images without the black padding.
###Code
splash = ants.color_splash(image, r['masks'])
display_images([splash], cols=1)
###Output
_____no_output_____
###Markdown
Step by Step Prediction Stage 1: Region Proposal Network. The Region Proposal Network (RPN) runs a lightweight binary classifier on a lot of boxes (anchors) over the image and returns object/no-object scores. Anchors with a high *objectness* score (positive anchors) are passed to stage two to be classified.Often, even positive anchors don't cover objects fully. So the RPN also regresses a refinement (a delta in location and size) to be applied to the anchors to shift and resize them a bit to the correct boundaries of the object. 1.a RPN Targets. The RPN targets are the training values for the RPN. To generate the targets, we start with a grid of anchors that cover the full image at different scales, and then we compute the IoU of the anchors with the ground truth objects. Positive anchors are those that have an IoU >= 0.7 with any ground truth object, and negative anchors are those that don't cover any object by more than 0.3 IoU. Anchors in between (i.e. those that cover an object by IoU >= 0.3 but < 0.7) are considered neutral and excluded from training.To train the RPN regressor, we also compute the shift and resizing needed to make the anchor cover the ground truth object completely.
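As a reference for the IoU thresholds mentioned above, here is a minimal NumPy sketch of the intersection-over-union computation between one box and a set of boxes (the function name and the `(y1, x1, y2, x2)` box layout are chosen for illustration and are not part of the library API):

```python
import numpy as np

def iou_one_vs_many(box, boxes):
    """IoU between one box and an array of boxes, all given as (y1, x1, y2, x2)."""
    # coordinates of the intersection rectangle
    y1 = np.maximum(box[0], boxes[:, 0])
    x1 = np.maximum(box[1], boxes[:, 1])
    y2 = np.minimum(box[2], boxes[:, 2])
    x2 = np.minimum(box[3], boxes[:, 3])
    intersection = np.maximum(y2 - y1, 0) * np.maximum(x2 - x1, 0)
    # union = sum of the two areas minus the intersection
    box_area = (box[2] - box[0]) * (box[3] - box[1])
    boxes_area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    union = box_area + boxes_area - intersection
    return intersection / union
```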
###Code
# Generate RPN training targets
# target_rpn_match is 1 for positive anchors, -1 for negative anchors
# and 0 for neutral anchors.
target_rpn_match, target_rpn_bbox = modellib.build_rpn_targets(
image.shape, model.anchors, gt_class_id, gt_bbox, model.config)
log("target_rpn_match", target_rpn_match)
log("target_rpn_bbox", target_rpn_bbox)
positive_anchor_ix = np.where(target_rpn_match[:] == 1)[0]
negative_anchor_ix = np.where(target_rpn_match[:] == -1)[0]
neutral_anchor_ix = np.where(target_rpn_match[:] == 0)[0]
positive_anchors = model.anchors[positive_anchor_ix]
negative_anchors = model.anchors[negative_anchor_ix]
neutral_anchors = model.anchors[neutral_anchor_ix]
log("positive_anchors", positive_anchors)
log("negative_anchors", negative_anchors)
log("neutral anchors", neutral_anchors)
# Apply refinement deltas to positive anchors
refined_anchors = utils.apply_box_deltas(
positive_anchors,
target_rpn_bbox[:positive_anchors.shape[0]] * model.config.RPN_BBOX_STD_DEV)
log("refined_anchors", refined_anchors, )
# Display positive anchors before refinement (dotted) and
# after refinement (solid).
visualize.draw_boxes(image, boxes=positive_anchors, refined_boxes=refined_anchors, ax=get_ax())
###Output
_____no_output_____
###Markdown
1.b RPN PredictionsHere we run the RPN graph and display its predictions.
###Code
# Run RPN sub-graph
pillar = model.keras_model.get_layer("ROI").output # node to start searching from
# TF 1.4 and 1.9 introduce new versions of NMS. Search for all names to support TF 1.3~1.10
nms_node = model.ancestor(pillar, "ROI/rpn_non_max_suppression:0")
if nms_node is None:
nms_node = model.ancestor(pillar, "ROI/rpn_non_max_suppression/NonMaxSuppressionV2:0")
if nms_node is None: #TF 1.9-1.10
nms_node = model.ancestor(pillar, "ROI/rpn_non_max_suppression/NonMaxSuppressionV3:0")
rpn = model.run_graph([image], [
("rpn_class", model.keras_model.get_layer("rpn_class").output),
("pre_nms_anchors", model.ancestor(pillar, "ROI/pre_nms_anchors:0")),
("refined_anchors", model.ancestor(pillar, "ROI/refined_anchors:0")),
("refined_anchors_clipped", model.ancestor(pillar, "ROI/refined_anchors_clipped:0")),
("post_nms_anchor_ix", nms_node),
("proposals", model.keras_model.get_layer("ROI").output),
])
# Show top anchors by score (before refinement)
limit = 100
sorted_anchor_ids = np.argsort(rpn['rpn_class'][:,:,1].flatten())[::-1]
visualize.draw_boxes(image, boxes=model.anchors[sorted_anchor_ids[:limit]], ax=get_ax())
# Show top anchors with refinement. Then with clipping to image boundaries
limit = 50
ax = get_ax(1, 2)
pre_nms_anchors = utils.denorm_boxes(rpn["pre_nms_anchors"][0], image.shape[:2])
refined_anchors = utils.denorm_boxes(rpn["refined_anchors"][0], image.shape[:2])
refined_anchors_clipped = utils.denorm_boxes(rpn["refined_anchors_clipped"][0], image.shape[:2])
visualize.draw_boxes(image, boxes=pre_nms_anchors[:limit],
refined_boxes=refined_anchors[:limit], ax=ax[0])
visualize.draw_boxes(image, refined_boxes=refined_anchors_clipped[:limit], ax=ax[1])
# Show refined anchors after non-max suppression
limit = 50
ixs = rpn["post_nms_anchor_ix"][:limit]
visualize.draw_boxes(image, refined_boxes=refined_anchors_clipped[ixs], ax=get_ax())
# Show final proposals
# These are the same as the previous step (refined anchors
# after NMS) but with coordinates normalized to [0, 1] range.
limit = 50
# Convert back to image coordinates for display
h, w = config.IMAGE_SHAPE[:2]
proposals = rpn['proposals'][0, :limit] * np.array([h, w, h, w])
visualize.draw_boxes(image, refined_boxes=proposals, ax=get_ax())
###Output
_____no_output_____
###Markdown
Stage 2: Proposal Classification. This stage takes the region proposals from the RPN and classifies them. 2.a Proposal Classification. Run the classifier heads on the proposals to generate class probabilities and bounding box regressions.
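The bounding box regressions are refinements of the proposal coordinates. A minimal sketch of how such deltas are typically applied, assuming the common (dy, dx, log(dh), log(dw)) parameterization used by Faster R-CNN-style heads (this mirrors what `utils.apply_box_deltas` is used for later in this notebook):

```python
import numpy as np

def apply_box_deltas_sketch(boxes, deltas):
    """boxes: [N, (y1, x1, y2, x2)]; deltas: [N, (dy, dx, log(dh), log(dw))]."""
    h = boxes[:, 2] - boxes[:, 0]
    w = boxes[:, 3] - boxes[:, 1]
    cy = boxes[:, 0] + 0.5 * h
    cx = boxes[:, 1] + 0.5 * w
    # shift the center and rescale the size
    cy = cy + deltas[:, 0] * h
    cx = cx + deltas[:, 1] * w
    h = h * np.exp(deltas[:, 2])
    w = w * np.exp(deltas[:, 3])
    return np.stack([cy - 0.5 * h, cx - 0.5 * w,
                     cy + 0.5 * h, cx + 0.5 * w], axis=1)
```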
###Code
# Get input and output to classifier and mask heads.
mrcnn = model.run_graph([image], [
("proposals", model.keras_model.get_layer("ROI").output),
("probs", model.keras_model.get_layer("mrcnn_class").output),
("deltas", model.keras_model.get_layer("mrcnn_bbox").output),
("masks", model.keras_model.get_layer("mrcnn_mask").output),
("detections", model.keras_model.get_layer("mrcnn_detection").output),
])
# Get detection class IDs. Trim zero padding.
det_class_ids = mrcnn['detections'][0, :, 4].astype(np.int32)
det_count = np.where(det_class_ids == 0)[0][0]
det_class_ids = det_class_ids[:det_count]
detections = mrcnn['detections'][0, :det_count]
print("{} detections: {}".format(
det_count, np.array(dataset.class_names)[det_class_ids]))
captions = ["{} {:.3f}".format(dataset.class_names[int(c)], s) if c > 0 else ""
for c, s in zip(detections[:, 4], detections[:, 5])]
visualize.draw_boxes(
image,
refined_boxes=utils.denorm_boxes(detections[:, :4], image.shape[:2]),
visibilities=[2] * len(detections),
captions=captions, title="Detections",
ax=get_ax())
###Output
_____no_output_____
###Markdown
2.c Step by Step Detection. Here we dive deeper into how the detections are processed.
###Code
# Proposals are in normalized coordinates. Scale them
# to image coordinates.
h, w = config.IMAGE_SHAPE[:2]
proposals = np.around(mrcnn["proposals"][0] * np.array([h, w, h, w])).astype(np.int32)
# Class ID, score, and mask per proposal
roi_class_ids = np.argmax(mrcnn["probs"][0], axis=1)
roi_scores = mrcnn["probs"][0, np.arange(roi_class_ids.shape[0]), roi_class_ids]
roi_class_names = np.array(dataset.class_names)[roi_class_ids]
roi_positive_ixs = np.where(roi_class_ids > 0)[0]
# How many ROIs vs empty rows?
print("{} Valid proposals out of {}".format(np.sum(np.any(proposals, axis=1)), proposals.shape[0]))
print("{} Positive ROIs".format(len(roi_positive_ixs)))
# Class counts
print(list(zip(*np.unique(roi_class_names, return_counts=True))))
# Display a random sample of proposals.
# Proposals classified as background are dotted, and
# the rest show their class and confidence score.
limit = 200
ixs = np.random.randint(0, proposals.shape[0], limit)
captions = ["{} {:.3f}".format(dataset.class_names[c], s) if c > 0 else ""
for c, s in zip(roi_class_ids[ixs], roi_scores[ixs])]
visualize.draw_boxes(image, boxes=proposals[ixs],
visibilities=np.where(roi_class_ids[ixs] > 0, 2, 1),
captions=captions, title="ROIs Before Refinement",
ax=get_ax())
###Output
_____no_output_____
###Markdown
Apply Bounding Box Refinement
###Code
# Class-specific bounding box shifts.
roi_bbox_specific = mrcnn["deltas"][0, np.arange(proposals.shape[0]), roi_class_ids]
log("roi_bbox_specific", roi_bbox_specific)
# Apply bounding box transformations
# Shape: [N, (y1, x1, y2, x2)]
refined_proposals = utils.apply_box_deltas(
proposals, roi_bbox_specific * config.BBOX_STD_DEV).astype(np.int32)
log("refined_proposals", refined_proposals)
# Show positive proposals
# ids = np.arange(roi_boxes.shape[0]) # Display all
limit = 5
ids = np.random.randint(0, len(roi_positive_ixs), limit) # Display random sample
captions = ["{} {:.3f}".format(dataset.class_names[c], s) if c > 0 else ""
for c, s in zip(roi_class_ids[roi_positive_ixs][ids], roi_scores[roi_positive_ixs][ids])]
visualize.draw_boxes(image, boxes=proposals[roi_positive_ixs][ids],
refined_boxes=refined_proposals[roi_positive_ixs][ids],
visibilities=np.where(roi_class_ids[roi_positive_ixs][ids] > 0, 1, 0),
captions=captions, title="ROIs After Refinement",
ax=get_ax())
###Output
_____no_output_____
###Markdown
Filter Low Confidence Detections
###Code
# Remove boxes classified as background
keep = np.where(roi_class_ids > 0)[0]
print("Keep {} detections:\n{}".format(keep.shape[0], keep))
# Remove low confidence detections
keep = np.intersect1d(keep, np.where(roi_scores >= config.DETECTION_MIN_CONFIDENCE)[0])
print("Remove boxes below {} confidence. Keep {}:\n{}".format(
config.DETECTION_MIN_CONFIDENCE, keep.shape[0], keep))
###Output
_____no_output_____
###Markdown
Per-Class Non-Max Suppression
###Code
# Apply per-class non-max suppression
pre_nms_boxes = refined_proposals[keep]
pre_nms_scores = roi_scores[keep]
pre_nms_class_ids = roi_class_ids[keep]
nms_keep = []
for class_id in np.unique(pre_nms_class_ids):
# Pick detections of this class
ixs = np.where(pre_nms_class_ids == class_id)[0]
# Apply NMS
class_keep = utils.non_max_suppression(pre_nms_boxes[ixs],
pre_nms_scores[ixs],
config.DETECTION_NMS_THRESHOLD)
# Map indices
class_keep = keep[ixs[class_keep]]
nms_keep = np.union1d(nms_keep, class_keep)
print("{:22}: {} -> {}".format(dataset.class_names[class_id][:20],
keep[ixs], class_keep))
keep = np.intersect1d(keep, nms_keep).astype(np.int32)
print("\nKept after per-class NMS: {}\n{}".format(keep.shape[0], keep))
# Show final detections
ixs = np.arange(len(keep)) # Display all
# ixs = np.random.randint(0, len(keep), 10) # Display random sample
captions = ["{} {:.3f}".format(dataset.class_names[c], s) if c > 0 else ""
for c, s in zip(roi_class_ids[keep][ixs], roi_scores[keep][ixs])]
visualize.draw_boxes(
image, boxes=proposals[keep][ixs],
refined_boxes=refined_proposals[keep][ixs],
visibilities=np.where(roi_class_ids[keep][ixs] > 0, 1, 0),
captions=captions, title="Detections after NMS",
ax=get_ax())
###Output
_____no_output_____
###Markdown
Stage 3: Generating Masks. This stage takes the detections (refined bounding boxes and class IDs) from the previous layer and runs the mask head to generate segmentation masks for every instance. 3.a Mask Targets. These are the training targets for the mask branch.
###Code
display_images(np.transpose(gt_mask, [2, 0, 1]), cmap="Blues")
###Output
_____no_output_____
###Markdown
3.b Predicted Masks
###Code
# Get predictions of mask head
mrcnn = model.run_graph([image], [
("detections", model.keras_model.get_layer("mrcnn_detection").output),
("masks", model.keras_model.get_layer("mrcnn_mask").output),
])
# Get detection class IDs. Trim zero padding.
det_class_ids = mrcnn['detections'][0, :, 4].astype(np.int32)
det_count = np.where(det_class_ids == 0)[0][0]
det_class_ids = det_class_ids[:det_count]
print("{} detections: {}".format(
det_count, np.array(dataset.class_names)[det_class_ids]))
# Masks
det_boxes = utils.denorm_boxes(mrcnn["detections"][0, :, :4], image.shape[:2])
det_mask_specific = np.array([mrcnn["masks"][0, i, :, :, c]
for i, c in enumerate(det_class_ids)])
det_masks = np.array([utils.unmold_mask(m, det_boxes[i], image.shape)
for i, m in enumerate(det_mask_specific)])
log("det_mask_specific", det_mask_specific)
log("det_masks", det_masks)
display_images(det_mask_specific[:4] * 255, cmap="Blues", interpolation="none")
display_images(det_masks[:4] * 255, cmap="Blues", interpolation="none")
###Output
_____no_output_____
###Markdown
Visualize Activations. In some cases it helps to look at the output from different layers and visualize them to catch issues and odd patterns.
###Code
# Get activations of a few sample layers
activations = model.run_graph([image], [
("input_image", tf.identity(model.keras_model.get_layer("input_image").output)),
("res2c_out", model.keras_model.get_layer("res2c_out").output),
("res3c_out", model.keras_model.get_layer("res3c_out").output),
("res4w_out", model.keras_model.get_layer("res4w_out").output), # for resnet100
("rpn_bbox", model.keras_model.get_layer("rpn_bbox").output),
("roi", model.keras_model.get_layer("ROI").output),
])
# Input image (normalized)
_ = plt.imshow(modellib.unmold_image(activations["input_image"][0],config))
# Backbone feature map
display_images(np.transpose(activations["res2c_out"][0,:,:,:4], [2, 0, 1]), cols=4)
###Output
_____no_output_____ |
_notebooks/2022-03-23-SthlmMeanTemperature.ipynb | ###Markdown
Forecasting temperature with SARIMA- toc: true- author: Andreas Palmgren- use_math: true 1. Introduction> Weather forecasting is a difficult but important task. All predictions have a degree of uncertainty, but the chaotic character of our atmosphere causes weather forecasting to become especially challenging. As you might have experienced yourself, day-to-day weather prediction becomes unreliable more than a week into the future. > **The aim for this project is to find an appropriate ARIMA model able to forecast monthly mean air temperature in Stockholm.**
###Code
#collapse
library(plotly)
library(ggplot2)
library(ggfortify)
library(repr)
library(tidyr)
library(tsibble)
library(TSstudio)
library(zoo)
library(dplyr)
library(tseries)
library(forecast)
library(ggridges)
library(viridis)
library(hrbrthemes)
library(heatmaply)
library(gridExtra)
Sys.setlocale(locale = "English")
windowsFonts(Times=windowsFont("Times New Roman"))
options(repr.plot.width=14, repr.plot.height=8)
# Center position of plots
IRdisplay::display_html('<style>.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}</style>')
th <- theme(text=element_text(size = 20, face = "bold", family="Times"),
plot.title = element_text(size = 25, face = "bold", family="Times", hjust = 0.5),
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank(),
panel.background = element_rect(fill = "white"),
panel.grid = element_line(size = 0.25, linetype = 'solid',
colour = "grey80"))
###Output
_____no_output_____
###Markdown
2. Data source> The underlying data consist of historical weather observation of monthly mean air temperature in Stockholm. >**Reference**Anders Moberg (2021) Stockholm Historical Weather Observations — Monthly mean air temperatures since 1756. Dataset version 3. Bolin Centre Database. https://doi.org/10.17043/stockholm-historical-monthly-temperature-3
###Code
train_df <- read.csv("dataset/train_stockholm_monthly_mean_temperature.csv", sep=';')
test_df <- read.csv("dataset/test_stockholm_monthly_mean_temperature.csv", sep=';')
df <- rbind(train_df, test_df)
head(df)
#collapse
train <- ts(as.vector(t(as.matrix(train_df[,-1]))), start=c(min(train_df$year), 1),
end=c(max(train_df$year), 12), frequency =12)
test <- ts(as.vector(t(as.matrix(test_df[,-1]))), start=c(2017, 1),
end=c(2020, 12), frequency =12)
df_ts <- ts(as.vector(t(as.matrix(df[,-1]))), start=c(1980, 1),
end=c(2020, 12), frequency =12)
# Reshaped version for visualization
train_reshape <- data.frame(date=as.Date(as.yearmon(time(train))), temp=as.matrix(train))
train_reshape$month <- format(train_reshape$date, "%b")
train_reshape$year <- format(train_reshape$date, "%Y")
###Output
_____no_output_____
###Markdown
3. Data analysis 3.1 Data visualization> We must gain a better understanding of our time series by inspection, and the most intuitive way of doing this is through a line plot. As seen in the figure below, our time series is clearly not stationary. There does not appear to be any obvious trend, but a seasonal pattern can be seen and we should investigate this further. Our dataset might suffer from seasonal outliers, as some peaks deviate heavily from others.
###Code
#collapse
ggplot(data=train_reshape, aes(x=date, y=temp, group=1))+
ggtitle("Monthly Mean Air Temperature (°C)")+
geom_line(color="blue", size=1.2)+
xlab("\nYear") + ylab("°C") +
th + theme(legend.position="none")
###Output
_____no_output_____
###Markdown
> Dealing with monthly data, a ridgeplot over all months gives further insight. Annual seasonal pattern is strong. January and February might suffer from outliers and should be investigated.
###Code
#collapse
ggplot(train_reshape, aes(x = temp, y = factor(month, levels = month.abb), fill = ..x..))+
geom_density_ridges_gradient(scale = 3, rel_min_height = 0.01, lwd=1.2) +
scale_fill_viridis(name = "Temp. [F]", option = "A") +
labs(title = 'Ridge plot for Monthly temperatures (°C)') +
scale_y_discrete(limits=rev) +
xlab("°C") + ylab("") +
th + theme(axis.line = element_line(size = 2, colour = "grey80"))
###Output
Picking joint bandwidth of 0.787
###Markdown
> A boxplot is a better alternative for visualizing outliers. Variance differs between months.
###Code
#collapse
ggplot(train_reshape, aes(x = factor(month, levels = month.abb), y = temp, fill=factor(month, levels = month.abb)))+
geom_boxplot(outlier.alpha = 0, alpha=0.3, lwd=1.2) +
geom_jitter(size=2.4, position=position_jitter(0.23), aes(colour=factor(month, levels = month.abb)))+
xlab("\nMonth") + ylab("°C") +
th + theme(legend.position="none")
###Output
_____no_output_____
###Markdown
> By inspecting a seasonal subseries plot, we see that the potential outliers reside in the earlier years. Our dataset might suffer from structural breaks.
###Code
#collapse
ggsubseriesplot(train) +
ylab("Temperature") +
ggtitle("Seasonal subseries plot") +
th
###Output
_____no_output_____
###Markdown
> Dealing with temperature, a heatmap can also be an intuitive way of visualizing our dataset. The lowest temperatures are recorded during February 1985 and January 1987. We have already identified these outliers. But it also appears to become warmer in July and August after 1993.
###Code
#collapse
dt <- as.matrix(train_df[,-1])
ggplot(train_reshape, aes(factor(month, levels = month.abb), year, fill= temp)) +
geom_tile() +
scale_fill_distiller(palette = "RdBu") +
th
###Output
_____no_output_____
###Markdown
> Our seasonal component is clear at this point and is once again confirmed by plotting the sample autocorrelation function (ACF). No clear trend seems apparent, but there is a strong seasonal component.
###Code
#collapse
ggAcf(train, lag.max = 36) +
labs(title = 'ACF plot')+
th
###Output
_____no_output_____
###Markdown
> With monthly data, our dominating period is obviously 12. The periodogram can be useful to identify other periods, or when one is unsure of the dominating period. The following periodogram is calculated using a fast Fourier transform and can be smoothed with a series of modified Daniell smoothers.
###Code
spec.pgram(train, log="no", main="Raw Periodogram")
###Output
_____no_output_____
###Markdown
3.2 Stationarity > Seasonal differencing is the difference between an observation and the corresponding observation from the previous year.$$y_t' = y_t - y_{t-12}$$After taking the first seasonal difference, the time series does look stationary.
###Code
#collapse
# Seasonal difference
train.diff = diff(train, lag=12)
autoplot(train.diff, xlab="Year", ylab="Temperature (°C)")+
ggtitle("First Seasonal Difference")+
th +
guides(colour = guide_legend(title.hjust = 20))
###Output
_____no_output_____
###Markdown
**Augmented Dickey-Fuller test**. > Rejection of the null hypothesis is evidence of stationarity. It should be noted, however, that most unit root tests have a high type 1 error rate, that is, incorrect rejection of a true null hypothesis.
###Code
adf.test(train.diff)
###Output
Warning message in adf.test(train.diff):
"p-value smaller than printed p-value"
###Markdown
**Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test**.> Check if the time series is stationary around a deterministic trend. Presence of unit root is not the null hypothesis but rather the alternative.
###Code
kpss.test(train.diff)
###Output
Warning message in kpss.test(train.diff):
"p-value greater than printed p-value"
###Markdown
3.3 Possible transformations> The dataset has some problematic characteristics which might not be captured by an ordinary ARIMA model. Our training period is long, with 40 years of observations. How relevant are old observations for predicting future temperature? Signs of structural breaks have been spotted, and it might prove wise to use a shorter training period. 4. Model building Modeling will be done through manual configuration as well as automatic selection. Our chosen model will be investigated for residual errors. Let d and D be nonnegative integers; then {$X_t$} is a $SARIMA(p, d, q)\times(P,D,Q)_s$ process defined by$$\phi(B) \Phi(B^s) (1-B)^d (1-B^s)^D X_t = \theta(B) \Theta(B^s) Z_t, \quad {Z_t} \sim WN(0, \sigma^2)$$where * $\phi(z) = 1- \phi_1 z - ... - \phi_p z^p $* $\Phi(z) = 1- \Phi_1 z - ... - \Phi_P z^P $* $\theta(z)= 1- \theta_1 z - ... - \theta_q z^q $* $\Theta(z)= 1- \Theta_1 z - ... - \Theta_Q z^Q $ 4.1 Manually configured > Both the ACF and PACF have nonseasonal spikes at lag 1 which then cut off. This would suggest starting with p and q equal to one. The ACF has a seasonal spike at lag 12 which then cuts off, while the PACF seasonal spikes tail off. This suggests our time series is described by a seasonal moving average model with one seasonal term (Q = 1). $$\text{ARIMA}(1,0,1) \times (0, 1, 1)_{12}$$
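Spelled out with the backshift-polynomial definitions above, this candidate model reads
$$(1-\phi_1 B)\,(1-B^{12})\,X_t = (1-\theta_1 B)\,(1-\Theta_1 B^{12})\,Z_t$$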
###Code
#collapse
p1 = ggAcf(train.diff, lag.max = 36) +
ggtitle("ACF plot")+
th
p2 = ggPacf(train.diff, lag.max = 36) +
ggtitle("PACF plot")+
th
grid.arrange(p1, p2, ncol=2)
###Output
_____no_output_____
###Markdown
4.2 Automatic configuration > One automatic configuration would be a grid search of hyperparameters. The model with lowest AIC came out to be the same model as our manual configuration apart from one additional autoregressive term.
###Code
#collapse
p <- q <- P <- Q <- 0:3
grid <- expand.grid(p=p, q=q, P=P, Q=Q)
grid$k <- rowSums(grid)
grid <- grid %>% filter(k<=4)
arima_search <- lapply(1:nrow(grid), function(i){
mdl <- NULL
mdl <- try(arima(train, order=c(grid$p[i], 0, grid$q[i]),
seasonal=list(order=c(grid$P[i], 1, grid$Q[i]), period=12),
optim.control = list(maxit = 1000)))
aic_scores <- data.frame(p = grid$p[i], d=0, q=grid$q[i], P=grid$P[i], D=1, Q=grid$Q[i], AIC = mdl$aic)
}) %>% bind_rows() %>% arrange(AIC)
head(arima_search)
###Output
_____no_output_____
###Markdown
> **Auto arima** does not return the model with lowest AIC score, but has several requirements for what is considered a good model. It will not return models with roots close to the unit circle since forecasts would be numerically unstable.
###Code
mdl = auto.arima(train)
print(mdl)
###Output
Series: train
ARIMA(0,0,2)(2,1,0)[12] with drift
Coefficients:
ma1 ma2 sar1 sar2 drift
0.3054 0.0881 -0.6507 -0.3184 0.0054
s.e. 0.0478 0.0467 0.0467 0.0459 0.0063
sigma^2 = 4.877: log likelihood = -955.75
AIC=1923.51 AICc=1923.71 BIC=1947.92
###Markdown
5. Diagnostic checking> Let us investigate our manually configured model.$$\text{ARIMA}(1,0,1) \times (0, 1, 1)_{12}$$
###Code
mdl <- arima(train, order = c(1, 0, 1), seasonal=list(order=c(0,1,1)), optim.control = list(maxit = 1000))
checkresiduals(mdl)
Box.test(mdl$residuals)
cpgram(mdl$residuals, main="Cumulative Periodogram of the residuals")
###Output
_____no_output_____
###Markdown
> A Shapiro-Wilk test reveals evidence against the residuals being normally distributed. The QQ-plot helps in identifying this departure, and it becomes apparent that our seasonal outliers are not captured by the model. This is unfortunately a limitation of ARIMA models.
###Code
#collapse
qqnorm(mdl$residuals, pch=1, frame=FALSE)
qqline(mdl$residuals, col="steelblue", lwd=2)
shapiro.test(mdl$residuals)
###Output
_____no_output_____
###Markdown
6. Forecasting> Let us try to forecast with our model$$\text{ARIMA}(1,0,1) \times (0, 1, 1)_{12}$$
###Code
#collapse
pred <- predict(mdl, n.ahead=48)
pred_reshape <- data.frame(date=as.Date(as.yearmon(time(pred$pred))), pred=as.matrix(pred$pred))
se_reshape <- data.frame(date=as.Date(as.yearmon(time(pred$se))), se=as.matrix(pred$se))
test_reshape <- data.frame(date=as.Date(as.yearmon(time(test))), temp=as.matrix(test))
predict <- merge(pred_reshape, se_reshape, by="date") %>% merge(test_reshape, by="date")
#collapse
ggplot() +
geom_line(data=predict, aes(x=date, y = pred), color="blue", size=1.3) +
geom_ribbon(data=predict, aes(x=date, ymax=pred+2*se, ymin=pred-2*se), fill="pink", alpha=.5) +
geom_line(data=tail(rbind(train_reshape[,c("date", "temp")], test_reshape), n=72), aes(x=date, y = temp), size=1.2) +
ggtitle("Forecast of Monthly Mean Air Temperature") +
xlab("\nYear") + ylab("°C") +
th
###Output
_____no_output_____ |
_site/content/distributions/change-of-variables.ipynb | ###Markdown
How do distributions transform under a change of variables ?Kyle Cranmer, March 2016
###Code
%pylab inline --no-import-all
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
We are interested in understanding how distributions transform under a change of variables.Let's start with a simple example. Think of a spinner like on a game of Twister. We flick the spinner and it stops. Let's call the angle of the pointer $x$. It seems a safe assumption that the distribution of $x$ is uniform between $[0,2\pi)$... so $p_x(x) = 1/(2\pi)$Now let's say that we change variables to $y=\cos(x)$ (sorry if the names are confusing here, don't think about x- and y-coordinates, these are just names for generic variables). The question is this: **what is the distribution of y?** Let's call it $p_y(y)$Well it's easy to do with a simulation, let's try it out
###Code
# generate samples for x, evaluate y=cos(x)
n_samples = 100000
x = np.random.uniform(0,2*np.pi,n_samples)
y = np.cos(x)
# make a histogram of x
n_bins = 50
counts, bins, patches = plt.hist(x, bins=50, normed=True, alpha=0.3)
plt.plot([0,2*np.pi], (1./2/np.pi, 1./2/np.pi), lw=2, c='r')
plt.xlim(0,2*np.pi)
plt.xlabel('x')
plt.ylabel('$p_x(x)$')
###Output
_____no_output_____
###Markdown
Ok, now let's make a histogram for $y=\cos(x)$
###Code
counts, y_bins, patches = plt.hist(y, bins=50, normed=True, alpha=0.3)
plt.xlabel('y')
plt.ylabel('$p_y(y)$')
###Output
_____no_output_____
###Markdown
It's not uniform! Why is that? Let's look at the $x-y$ relationship
###Code
# make a scatter of x,y
plt.scatter(x[:300],y[:300]) #just the first 300 points
xtest = .2
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = 2*np.pi-xtest
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = np.pi/2
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = 2*np.pi-xtest
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
plt.ylim(-1.5,1.5)
plt.xlim(-1,7)
###Output
_____no_output_____
###Markdown
The two sets of vertical lines are both separated by $0.1$. The probability $P(a < x < b)$ must equal the probability of $P( cos(b) < y < cos(a) )$. In this example there are two different values of $x$ that give the same $y$ (see green and red lines), so we need to take that into account. For now, let's just focus on the first part of the curve with $x<\pi$.So we can write (this is the important equation):\begin{equation}\int_a^b p_x(x) dx = \int_{y_b}^{y_a} p_y(y) dy \end{equation}where $y_a = \cos(a)$ and $y_b = \cos(b)$.and we can re-write the integral on the right by using a change of variables (pure calculus)\begin{equation}\int_a^b p_x(x) dx = \int_{y_b}^{y_a} p_y(y) dy = \int_a^b p_y(y(x)) \left| \frac{dy}{dx}\right| dx \end{equation}notice that the limits of integration and integration variable are the same for the left and right sides of the equation, so the integrands must be the same too. Therefore:\begin{equation}p_x(x) = p_y(y) \left| \frac{dy}{dx}\right| \end{equation}and equivalently\begin{equation}p_y(y) = p_x(x) \,/ \,\left| \, {dy}/{dx}\, \right | \end{equation}The factor $\left|\frac{dy}{dx} \right|$ is called a Jacobian. When it is large it is stretching the probability in $x$ over a large range of $y$, so it makes sense that it is in the denominator.
###Code
plt.plot((0.,1), (0,.3))
plt.plot((0.,1), (0,0), lw=2)
plt.plot((1.,1), (0,.3))
plt.ylim(-.1,.4)
plt.xlim(-.1,1.6)
plt.text(0.5,0.2, '1', color='b')
plt.text(0.2,0.03, 'x', color='black')
plt.text(0.5,-0.05, 'y=cos(x)', color='g')
plt.text(1.02,0.1, '$\sin(x)=\sqrt{1-y^2}$', color='r')
###Output
_____no_output_____
###Markdown
In our case:\begin{equation}\left|\frac{dy}{dx} \right| = \sin(x)\end{equation}Looking at the right-triangle above you can see $\sin(x)=\sqrt{1-y^2}$ and finally there will be an extra factor of 2 for $p_y(y)$ to take into account $x>\pi$. So we arrive at\begin{equation}p_y(y) = 2 \times \frac{1}{2 \pi} \frac{1}{\sin(x)} = \frac{1}{\pi} \frac{1}{\sin(\arccos(y))} = \frac{1}{\pi} \frac{1}{\sqrt{1-y^2}}\end{equation} Notice that when $y=\pm 1$ the pdf is diverging. This is called a [caustic](http://www.phikwadraat.nl/huygens_cusp_of_tea/) and you see them in your coffee and rainbows! (figures: examples of caustics in a coffee cup and a rainbow) **Let's check our prediction**
###Code
counts, y_bins, patches = plt.hist(y, bins=50, normed=True, alpha=0.3)
pdf_y = (1./np.pi)/np.sqrt(1.-y_bins**2)
plt.plot(y_bins, pdf_y, c='r', lw=2)
plt.ylim(0,5)
plt.xlabel('y')
plt.ylabel('$p_y(y)$')
###Output
_____no_output_____
###Markdown
Perfect! A trick using the cumulative distribution function (cdf) to generate random numbers. Let's consider a different variable transformation now -- it is a special one that we can use to our advantage. \begin{equation}y(x) = \textrm{cdf}(x) = \int_{-\infty}^x p_x(x') dx'\end{equation}Here's a plot of a distribution and cdf for a Gaussian. (Note: the axes are different for the pdf and the cdf, see http://matplotlib.org/examples/api/two_scales.html)
###Code
from scipy.stats import norm
x_for_plot = np.linspace(-3,3, 30)
fig, ax1 = plt.subplots()
ax1.plot(x_for_plot, norm.pdf(x_for_plot), c='b')
ax1.set_ylabel('p(x)', color='b')
for tl in ax1.get_yticklabels():
tl.set_color('b')
ax2 = ax1.twinx()
ax2.plot(x_for_plot, norm.cdf(x_for_plot), c='r')
ax2.set_ylabel('cdf(x)', color='r')
for tl in ax2.get_yticklabels():
tl.set_color('r')
###Output
_____no_output_____
###Markdown
Ok, so let's use our result about how distributions transform under a change of variables to predict the distribution of $y=cdf(x)$. We need to calculate \begin{equation}\frac{dy}{dx} = \frac{d}{dx} \int_{-\infty}^x p_x(x') dx'\end{equation}Just like particles and anti-particles, when derivatives meet anti-derivatives they annihilate. So $\frac{dy}{dx} = p_x(x)$, which shouldn't be a surprise.. the slope of the cdf is the pdf.So putting these together we find the distribution for $y$ is:\begin{equation}p_y(y) = p_x(x) \, / \, \frac{dy}{dx} = p_x(x) /p_x(x) = 1\end{equation}So it's just a uniform distribution from $[0,1]$, which is perfect for random numbers.We can turn this around and generate a uniformly random number between $[0,1]$, take the inverse of the cdf and we should have the distribution we want for $x$.Let's try it for a Gaussian. The inverse of the cdf for a Gaussian is called [ppf](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.norm.html)
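As an aside, the same inverse-CDF trick works for any distribution whose cdf we can invert analytically. A minimal sketch for the exponential distribution, assuming `np` and `plt` as already loaded in this notebook (the rate `lam` is an arbitrary value chosen for illustration):

```python
lam = 2.0                                   # hypothetical rate parameter
u = np.random.uniform(0, 1, 100000)         # uniform samples in [0, 1)
x_exp = -np.log(1. - u) / lam               # invert cdf(x) = 1 - exp(-lam * x)
counts, bins, patches = plt.hist(x_exp, bins=50, normed=True, alpha=0.3)
plt.plot(bins, lam * np.exp(-lam * bins), c='r', lw=2)  # predicted pdf
plt.xlabel('x')
```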
###Code
norm.ppf.__doc__
#check it out
norm.cdf(0), norm.ppf(0.5)
###Output
_____no_output_____
###Markdown
Ok, let's use CDF trick to generate Normally-distributed (aka Gaussian-distributed) random numbers
###Code
rand_cdf = np.random.uniform(0,1,10000)
rand_norm = norm.ppf(rand_cdf)
_ = plt.hist(rand_norm, bins=30, normed=True, alpha=0.3)
plt.xlabel('x')
###Output
_____no_output_____ |
maps_hash/.ipynb_checkpoints/caching-checkpoint.ipynb | ###Markdown
Caching can be defined as the process of storing data in temporary storage to avoid recomputation or to avoid reading the data from a relatively slower part of memory again and again. Thus caching serves as a fast "look-up" storage allowing programs to execute faster. Let's use caching to chalk out an efficient solution for a problem. Problem Statement: A child is running up a staircase and can hop either 1 step, 2 steps or 3 steps at a time. If the staircase has `n` steps, write a function to count the number of possible ways in which the child can run up the stairs. For example: * `n == 1` then `answer = 1`* `n == 3` then `answer = 4` * `n == 5` then `answer = 13`
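The examples above follow a tribonacci-style recurrence, with $f(n)$ the number of ways to climb $n$ steps:
$$f(n) = f(n-1) + f(n-2) + f(n-3), \qquad f(1)=1,\; f(2)=2,\; f(3)=4$$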
###Code
def staircase(n):
# Base Case - minimum steps possible and number of ways the child can climb them
# Inductive Hypothesis - ways to climb rest of the steps
# Inductive Step - use Inductive Hypothesis to formulate a solution
pass
def test_function(test_case):
answer = staircase(test_case[0])
if answer == test_case[1]:
print("Pass")
else:
print("Fail")
test_case = [4, 7]
test_function(test_case)
test_case = [5, 13]
test_function(test_case)
test_case = [3, 4]
test_function(test_case)
test_case = [20, 121415]
test_function(test_case)
###Output
_____no_output_____
###Markdown
Problem Statement: While using recursion for the above problem, you might have noticed a small problem with efficiency.Let's take a look at an example.* Say the total number of steps is `5`. This means that we will have to compute the answers for `(n=4), (n=3), and (n=2)`* To calculate the answer for `n=4`, we would have to call `(n=3), (n=2) and (n=1)`You can notice that even for a small number of steps (here 5), we are calling `n=3` and `n=2` multiple times. Each time we call a method, additional time is required to calculate the solution. Instead of calling on a particular value of `n` again and again, we can calculate it once and store the result to speed up our program.Your job is to use any data structure that you have used until now to write a faster implementation of the recursive function you wrote earlier. One possible memoized solution is sketched below.
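A sketch of one possible memoized implementation (not the notebook's official solution; the helper name `staircase_memo` is chosen here purely for illustration):

```python
def staircase_memo(n, memo=None):
    """Count ways to climb n steps taking 1, 2 or 3 steps at a time, caching results."""
    if memo is None:
        memo = {}
    if n < 0:
        return 0          # no way to climb a negative number of steps
    if n == 0:
        return 1          # one way: take no steps
    if n not in memo:
        memo[n] = (staircase_memo(n - 1, memo)
                   + staircase_memo(n - 2, memo)
                   + staircase_memo(n - 3, memo))
    return memo[n]

print(staircase_memo(4), staircase_memo(5), staircase_memo(20))  # expected: 7 13 121415
```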
###Code
def staircase(n):
pass
test_case = [4, 7]
test_function(test_case)
test_case = [5, 13]
test_function(test_case)
test_case = [3, 4]
test_function(test_case)
test_case = [20, 121415]
test_function(test_case)
###Output
_____no_output_____ |
Natural Language Processing Specialization/LSTMs_and_named_entity_recognition/C3_W3_Lecture_Notebook_Vanishing_Gradients.ipynb | ###Markdown
Vanishing Gradients : Ungraded Lecture NotebookIn this notebook you'll take another look at vanishing gradients, from an intuitive standpoint. BackgroundAdding layers to a neural network introduces multiplicative effects in both forward and backward propagation. The back prop in particular presents a problem as the gradient of activation functions can be very small. Multiplied together across many layers, their product can be vanishingly small! This results in weights not being updated in the front layers and training not progressing.Gradients of the sigmoid function, for example, are in the range 0 to 0.25. To calculate gradients for the front layers of a neural network the chain rule is used. This means that these tiny values are multiplied starting at the last layer, working backwards to the first layer, with the gradients shrinking exponentially at each step. Imports
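As a rough sketch of why this product matters: with $n$ layers of sigmoid activations, the gradient that reaches the first layer contains one factor of $\sigma'$ per layer, so (ignoring the weight terms) its magnitude is bounded by
$$\Big(\max_z \sigma'(z)\Big)^{n} = 0.25^{\,n},$$
which already drops below $10^{-6}$ after about 10 layers.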
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data, Activation & Gradient DataI'll start be creating some data, nothing special going on here. Just some values spread across the interval -5 to 5.* Try changing the range of values in the data to see how it impacts the plots that follow. ActivationThe example here is sigmoid() to squish the data x into the interval 0 to 1. GradientThis is the derivative of the sigmoid() activation function. It has a maximum of 0.25 at x = 0, the steepest point on the sigmoid plot.* Try changing the x value for finding the tangent line in the plot.
###Code
# Data
# Interval [-5, 5]
### START CODE HERE ###
x = np.linspace(-5, 5, 100) # try changing the range of values in the data. eg: (-100,100,1000)
### END CODE HERE ###
# Activation
# Interval [0, 1]
def sigmoid(x):
return 1 / (1 + np.exp(-x))
activations = sigmoid(x)
# Gradient
# Interval [0, 0.25]
def sigmoid_gradient(x):
return (x) * (1 - x)
gradients = sigmoid_gradient(activations)
# Plot sigmoid with tangent line
plt.plot(x, activations)
plt.title("Sigmoid Steepest Point")
plt.xlabel("x input data")
plt.ylabel("sigmoid(x)")
# Add the tangent line
### START CODE HERE ###
x_tan = 0 # x value to find the tangent. try different values within x declared above. eg: 2
### END CODE HERE ###
y_tan = sigmoid(x_tan) # y value
span = 1.7 # line span along x axis
data_tan = np.linspace(x_tan - span, x_tan + span) # x values to plot
gradient_tan = sigmoid_gradient(sigmoid(x_tan)) # gradient of the tangent
tan = y_tan + gradient_tan * (data_tan - x_tan) # y values to plot
plt.plot(x_tan, y_tan, marker="o", color="orange", label=True) # marker
plt.plot(data_tan, tan, linestyle="--", color="orange") # line
plt.show()
###Output
_____no_output_____
###Markdown
Plots Sub PlotsData values along the x-axis of the plots on the interval chosen for x, -5 to 5. Subplots:- x vs x- sigmoid of x- gradient of sigmoidNotice how the y axis keeps compressing from the left plot to the right plot. The interval range has shrunk from 10 to 1 to 0.25. How did this happen? As |x| gets larger the sigmoid approaches asymptotes at 0 and 1, and the sigmoid gradient shrinks towards 0.* Try changing the range of values in the code block above to see how it impacts the plots.
###Code
# Sub plots
fig, axs = plt.subplots(1, 3, figsize=(15, 4), sharex=True)
# X values
axs[0].plot(x, x)
axs[0].set_title("x values")
axs[0].set_ylabel("y=x")
axs[0].set_xlabel("x input data")
# Sigmoid
axs[1].plot(x, activations)
axs[1].set_title("sigmoid")
axs[1].set_ylabel("sigmoid")
axs[1].set_xlabel("x input data")
# Sigmoid gradient
axs[2].plot(x, gradients)
axs[2].set_title("sigmoid gradient")
axs[2].set_ylabel("gradient")
axs[2].set_xlabel("x input data")
fig.show()
###Output
_____no_output_____
###Markdown
Single PlotPutting all 3 series on a single plot can help visualize the compression. Notice how hard it is to interpret because sigmoid and sigmoid gradient are so small compared to the scale of the input data x.* Trying changing the plot ylim to zoom in.
###Code
# Single plot
plt.plot(x, x, label="data")
plt.plot(x, activations, label="sigmoid")
plt.plot(x, gradients, label="sigmoid gradient")
plt.legend(loc="upper left")
plt.title("Visualizing Compression")
plt.xlabel("x input data")
plt.ylabel("range")
### START CODE HERE ###
# plt.ylim(-.5, 1.5) # try shrinking the y axis limit for better visualization. eg: uncomment this line
### END CODE HERE ###
plt.show()
# Max, Min of each array
print("")
print("Max of x data :", np.max(x))
print("Min of x data :", np.min(x), "\n")
print("Max of sigmoid :", "{:.3f}".format(np.max(activations)))
print("Min of sigmoid :", "{:.3f}".format(np.min(activations)), "\n")
print("Max of gradients :", "{:.3f}".format(np.max(gradients)))
print("Min of gradients :", "{:.3f}".format(np.min(gradients)))
###Output
_____no_output_____
###Markdown
Numerical Impact Multiplication & DecayMultiplying numbers smaller than 1 results in smaller and smaller numbers. Below is an example that finds the gradient for an input x = 0 and multiplies it over n steps. Look how quickly it 'Vanishes' to almost zero. Yet sigmoid(x=0)=0.5 which has a sigmoid gradient of 0.25 and that happens to be the largest sigmoid gradient possible!(Note: This is NOT an implementation of back propagation.)* Try changing the number of steps n.* Try changing the input value x. Consider the impact on sigmoid and sigmoid gradient.
###Code
# Simulate decay
# Inputs
### START CODE HERE ###
n = 6 # number of steps : try changing this
x = 0 # value for input x : try changing this
### END CODE HERE ###
grad = sigmoid_gradient(sigmoid(x))
steps = np.arange(1, n + 1)
print("-- Inputs --")
print("steps :", n)
print("x value :", x)
print("sigmoid :", "{:.5f}".format(sigmoid(x)))
print("gradient :", "{:.5f}".format(grad), "\n")
# Loop to calculate cumulative total
print("-- Loop --")
vals = []
total_grad = 1 # initialize to 1 to satisfy first loop below
for s in steps:
total_grad = total_grad * grad
vals.append(total_grad)
print("step", s, ":", total_grad)
print("")
# Plot
plt.plot(steps, vals)
plt.xticks(steps)
plt.title("Multiplying Small Numbers")
plt.xlabel("Steps")
plt.ylabel("Cumulative Gradient")
plt.show()
###Output
-- Inputs --
steps : 6
x value : 0
sigmoid : 0.50000
gradient : 0.25000
-- Loop --
step 1 : 0.25
step 2 : 0.0625
step 3 : 0.015625
step 4 : 0.00390625
step 5 : 0.0009765625
step 6 : 0.000244140625
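###Markdown
As a quick cross-check of the loop above (added here for clarity), multiplying the same gradient n times is simply grad**n, so the cumulative value decays exponentially with the number of steps.
###Code
# Closed-form check: the loop's final value should equal grad ** n
print("closed form grad**n :", grad ** n)
print("matches loop result :", np.isclose(vals[-1], grad ** n))
###Output
_____no_output_____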
|
jupyter/Chapter09/kalman_constant_velocity.ipynb | ###Markdown
***Introduction to Radar Using Python and MATLAB*** Andy Harrison - Copyright (C) 2019 Artech House Kalman Filter with Constant Velocity*** Section 9.1.3.2 presents the multivariate Kalman filter, with a general framework given in Figure 9.11. This example illustrates Kalman filtering with a constant velocity model.*** Begin by setting the library path
###Code
import lib_path
###Output
_____no_output_____
###Markdown
Set the start time (s), end time (s) and time step (s)
###Code
start = 0.0
end = 20.0
step = 0.1
###Output
_____no_output_____
###Markdown
Calculate the number of updates and create the time array with the `linspace` routine from `numpy`
###Code
from numpy import linspace
number_of_updates = round( (end - start) / step) + 1
t, dt = linspace(start, end, number_of_updates, retstep=True)
###Output
_____no_output_____
###Markdown
Set the initial position (m)
###Code
px = 7.0
py = 11.0
pz = 21.0
###Output
_____no_output_____
###Markdown
Set the initial velocity (m/s)
###Code
vx = 10.0
vy = 20.0
vz = 15.0
###Output
_____no_output_____
###Markdown
Set the measurement noise variance (m^2) and the process noise variance (m^2, (m/s)^2)
###Code
measurement_noise_variance = 10.0
process_noise_variance = 1e-6
###Output
_____no_output_____
###Markdown
Create the target trajectory
###Code
from numpy import zeros
x_true = zeros([6, number_of_updates])
x = px + vx * t
y = py + vy * t
z = pz + vz * t
x_true[0] = x
x_true[1] = vx
x_true[2] = y
x_true[3] = vy
x_true[4] = z
x_true[5] = vz
###Output
_____no_output_____
###Markdown
Generate the measurement noise using the `random` routines from `numpy`
###Code
from numpy import random, sqrt
v = sqrt(measurement_noise_variance) * (random.rand(number_of_updates) - 0.5)
###Output
_____no_output_____
###Markdown
Initialize state and input control vector
###Code
from numpy import zeros_like
x = zeros(6)
u = zeros_like(x)
###Output
_____no_output_____
###Markdown
Initialize the covariance and control matrix
###Code
from numpy import eye
P = 1.0e3 * eye(6)
B = zeros_like(P)
###Output
_____no_output_____
###Markdown
Initialize measurement and process noise variance
###Code
R = measurement_noise_variance * eye(3)
Q = process_noise_variance * eye(6)
###Output
_____no_output_____
###Markdown
State transition matrix
###Code
A = eye(6)
A[0, 1] = dt
A[2, 3] = dt
A[4, 5] = dt
###Output
_____no_output_____
###Markdown
Measurement transition matrix
###Code
H = zeros([3, 6])
H[0, 0] = 1
H[1, 2] = 1
H[2, 4] = 1
###Output
_____no_output_____
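###Markdown
As an illustrative check (not part of the original text), we can apply A and H to a sample state vector. With the state ordered as [x, vx, y, vy, z, vz], A advances each position by its velocity times dt, and H simply picks out the three position components as the measurement.
###Code
# Illustrative check of the model matrices (added for clarity)
from numpy import array, matmul
x_example = array([px, vx, py, vy, pz, vz])  # the initial state defined above
print(matmul(A, x_example))  # positions advance by velocity * dt, velocities unchanged
print(matmul(H, x_example))  # measurement model returns [px, py, pz]
###Output
_____no_output_____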
###Markdown
Initialize the Kalman filter
###Code
from Libs.tracking import kalman
kf = kalman.Kalman(x, u, P, A, B, Q, H, R)
###Output
_____no_output_____
###Markdown
Generate the measurements
###Code
from numpy import matmul
z = [matmul(H, x_true[:, i]) + v[i] for i in range(number_of_updates)]
###Output
_____no_output_____
###Markdown
Update the filter for each measurement
###Code
kf.filter(z)
###Output
_____no_output_____
###Markdown
Display the results of the constant velocity Kalman filter using the `matplotlib` routines
###Code
from matplotlib import pyplot as plt
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Position - X
plt.figure()
plt.plot(t, x_true[0, :], '', label='True')
plt.plot(t, [z[0] for z in z], ':', label='Measurement')
plt.plot(t, [x[0] for x in kf.state], '--', label='Filtered')
plt.ylabel('Position - X (m)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Kalman Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Position - Y
plt.figure()
plt.plot(t, x_true[2, :], '', label='True')
plt.plot(t, [z[1] for z in z], ':', label='Measurement')
plt.plot(t, [x[2] for x in kf.state], '--', label='Filtered')
plt.ylabel('Position - Y (m)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Kalman Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Position - Z
plt.figure()
plt.plot(t, x_true[4, :], '', label='True')
plt.plot(t, [z[2] for z in z], ':', label='Measurement')
plt.plot(t, [x[4] for x in kf.state], '--', label='Filtered')
plt.ylabel('Position - Z (m)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Kalman Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Velocity - X
plt.figure()
plt.plot(t, x_true[1, :], '', label='True')
plt.plot(t, [x[1] for x in kf.state], '--', label='Filtered')
plt.ylabel('Velocity - X (m/s)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Kalman Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Velocity - Y
plt.figure()
plt.plot(t, x_true[3, :], '', label='True')
plt.plot(t, [x[3] for x in kf.state], '--', label='Filtered')
plt.ylabel('Velocity - Y (m/s)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Kalman Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Velocity - Z
plt.figure()
plt.plot(t, x_true[5, :], '', label='True')
plt.plot(t, [x[5] for x in kf.state], '--', label='Filtered')
plt.ylabel('Velocity - Z (m/s)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Kalman Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Residual
plt.figure()
plt.plot(t, kf.residual, '')
plt.ylabel('Residual (m)', size=12)
# Set the plot title and labels
plt.title('Kalman Filter', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
###Output
_____no_output_____ |
PHME_Notebook.ipynb | ###Markdown
Domain adaptation for fault diagnosis: A CWRU toy exampleThis is a toy DA example for the CWRU dataset. You can check [this paper](https://arxiv.org/pdf/1905.06004) for more details.Qin Wang @ ETH Zurich This notebook was used as part of my tutorial session for the European conference of the prognostics and health management society (PHME21). Please cite our work if you find this notebook useful:```latex@inproceedings{wang2019domain, title={Domain adaptive transfer learning for fault diagnosis}, author={Wang, Qin and Michau, Gabriel and Fink, Olga}, booktitle={2019 Prognostics and System Health Management Conference (PHM-Paris)}, pages={279--285}, year={2019}, organization={IEEE}}@article{wang2020missing, title={Missing-class-robust domain adaptation by unilateral alignment}, author={Wang, Qin and Michau, Gabriel and Fink, Olga}, journal={IEEE Transactions on Industrial Electronics}, volume={68}, number={1}, pages={663--671}, year={2020}, publisher={IEEE}}``` Load Dependencies
###Code
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Activation, BatchNormalization, Dropout, Conv1D, Flatten, ReLU
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam, SGD
import tensorflow.keras.backend as K
###Output
_____no_output_____
###Markdown
Download the data
###Code
!wget https://github.com/qinenergy/DA-diagnosis/releases/download/data/X.npy
!wget https://github.com/qinenergy/DA-diagnosis/releases/download/data/X-test.npy
!wget https://github.com/qinenergy/DA-diagnosis/releases/download/data/y.npy
!wget https://github.com/qinenergy/DA-diagnosis/releases/download/data/y-test.npy
###Output
--2021-06-30 09:45:24-- https://qin.ee/bearings/X.npy
Resolving qin.ee (qin.ee)... 35.177.27.26
Connecting to qin.ee (qin.ee)|35.177.27.26|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8192128 (7.8M) [application/octet-stream]
Saving to: ‘X.npy’
X.npy 100%[===================>] 7.81M 6.53MB/s in 1.2s
2021-06-30 09:45:26 (6.53 MB/s) - ‘X.npy’ saved [8192128/8192128]
--2021-06-30 09:45:26-- https://qin.ee/bearings/X-test.npy
Resolving qin.ee (qin.ee)... 35.177.27.26
Connecting to qin.ee (qin.ee)|35.177.27.26|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8192128 (7.8M) [application/octet-stream]
Saving to: ‘X-test.npy’
X-test.npy 100%[===================>] 7.81M 6.48MB/s in 1.2s
2021-06-30 09:45:28 (6.48 MB/s) - ‘X-test.npy’ saved [8192128/8192128]
--2021-06-30 09:45:29-- https://qin.ee/bearings/y.npy
Resolving qin.ee (qin.ee)... 35.177.27.26
Connecting to qin.ee (qin.ee)|35.177.27.26|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16128 (16K) [application/octet-stream]
Saving to: ‘y.npy’
y.npy 100%[===================>] 15.75K --.-KB/s in 0.1s
2021-06-30 09:45:29 (123 KB/s) - ‘y.npy’ saved [16128/16128]
--2021-06-30 09:45:29-- https://qin.ee/bearings/y-test.npy
Resolving qin.ee (qin.ee)... 35.177.27.26
Connecting to qin.ee (qin.ee)|35.177.27.26|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16128 (16K) [application/octet-stream]
Saving to: ‘y-test.npy’
y-test.npy 100%[===================>] 15.75K --.-KB/s in 0.1s
2021-06-30 09:45:30 (122 KB/s) - ‘y-test.npy’ saved [16128/16128]
###Markdown
Data Loading Load source and target data + We have preprocessed the data for you, they are FFT preprocessed vibration data.+ Source is recorded under load 3 + Target is recorded under load 1+ Each contains 10 classes, out of which 9 are different faults
###Code
data_src = np.load("X.npy")[::10]  # 10% subset for a faster run during the tutorial
label_src = np.load("y.npy")[::10]
data_tgt = np.load("X-test.npy")[::10]
label_tgt = np.load("y-test.npy")[::10]
print("Source dimension", data_src.shape, label_src.shape)
print("Target dimension", data_tgt.shape, label_tgt.shape)
###Output
Source dimension (200, 1024) (200,)
Target dimension (200, 1024) (200,)
###Markdown
Get the data FFT transformed
###Code
fft = lambda sig: abs(np.fft.fft(sig)[:len(sig)//2])
data_src_fft = np.array([fft(sig) for sig in data_src])
data_tgt_fft = np.array([fft(sig) for sig in data_tgt])
print("Source dimension", data_src_fft.shape, label_src.shape)
print("Target dimension", data_tgt_fft.shape, label_tgt.shape)
### Expand the last dimension for ease of feeding conv1d
data_src = np.expand_dims(data_src, axis=-1)
data_src_fft = np.expand_dims(data_src_fft, axis=-1)
data_tgt = np.expand_dims(data_tgt, axis=-1)
data_tgt_fft = np.expand_dims(data_tgt_fft, axis=-1)
print("Source dimension", data_src_fft.shape, data_src.shape)
print("Target dimension", data_tgt_fft.shape, data_tgt.shape)
###Output
Source dimension (200, 512, 1) (200, 1024, 1)
Target dimension (200, 512, 1) (200, 1024, 1)
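###Markdown
A brief side note (added here): only the first half of the FFT magnitudes is kept because the spectrum of a real-valued signal is symmetric, so no information is lost. The small check below, using one source sample, confirms this matches numpy's real FFT on those bins.
###Code
# Sanity check (illustrative): abs(fft)[:N//2] of a real signal matches abs(rfft) on those bins
sig = data_src[0, :, 0]  # one raw vibration signal of length 1024
print(np.allclose(fft(sig), np.abs(np.fft.rfft(sig))[:len(sig)//2]))
###Output
_____no_output_____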
###Markdown
Let's check the data first First let's visualize some samples from the target domain.
###Code
# Have a taste of the data; the given target samples are ordered by class (20 samples per class)
for i in range(10):
plt.plot(data_tgt[20*i], label="class: " + str(i))
assert(label_tgt[20*i] == i)
plt.legend()
###Output
_____no_output_____
###Markdown
Then visualize the FFT-ed target signals.
###Code
for i in range(10):
plt.plot(data_tgt_fft[20*i], label="class: " + str(i))
assert(label_tgt[20*i] == i)
plt.legend()
###Output
_____no_output_____
###Markdown
**Example** A source-only baseline model is provided.
###Code
def feature_extractor(x):
h = Conv1D(10, 3, padding='same', activation="relu")(x)
h = Dropout(0.5)(h)
h = Conv1D(10, 3, padding='same', activation="relu")(h)
h = Dropout(0.5)(h)
h = Conv1D(10, 3, padding='same', activation="relu")(h)
h = Dropout(0.5)(h)
h = Flatten()(h)
h = Dense(256, activation='relu')(h)
return h
def clf(x):
h = Dense(256, activation='relu')(x)
h = Dense(10, activation='softmax', name="clf")(h)
return h
def baseline():
input_dim = 512
inputs = Input(shape=(input_dim, 1))
features = feature_extractor(inputs)
logits = clf(features)
baseline_model = Model(inputs=inputs, outputs=logits)
adam = Adam(lr=0.0001)
baseline_model.compile(optimizer=adam,
loss=['sparse_categorical_crossentropy'], metrics=['accuracy'])
return baseline_model
# Set seed
accs = []
import random as python_random
for i in range(10):
python_random.seed(i)
np.random.seed(i)
tf.random.set_seed(i)
baseline_model = baseline()
# Run training
baseline_model.fit(data_src_fft, label_src, batch_size=128, epochs=400, shuffle=True, verbose=False)
# Run evaluating
score, acc = baseline_model.evaluate(data_tgt_fft, label_tgt, batch_size=200)
print("Accuracy for the baseline model on target data is", acc)
accs.append(acc)
print("ten run mean", np.mean(accs))
###Output
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:375: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
###Markdown
Task: Domain Adversarial Training!Let's now add domain adversarial ability to the baseline model. First we need to define the gradient reverse layer, because it is a custom op. **TODO** Gradient Reverse Layer (GRL) ---Define a custom keras layer GradReverse which can change the normal gradient to our expected reversed ones.Check the [GRL](https://arxiv.org/pdf/1409.7495) paper whenever necessary.Since we are customizing the layer, we will need some backend ops to reverse the gradient. Hint: For example, you can use [this decorator](https://www.tensorflow.org/api_docs/python/tf/custom_gradient) to customize the gradient.
###Code
@tf.custom_gradient
def grad_reverse(x):
y = tf.identity(x)
def custom_grad(dy):
return -dy
return y, custom_grad
class GradReverse(tf.keras.layers.Layer):
def __init__(self):
super().__init__()
def call(self, x):
return grad_reverse(x)
###Output
_____no_output_____
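###Markdown
A minimal sanity check (added here, not part of the original tutorial): the GRL should behave like the identity on the forward pass while negating gradients on the backward pass.
###Code
# Sanity check for the gradient reversal layer (illustrative)
x_check = tf.constant([1.0, -2.0, 3.0])
with tf.GradientTape() as tape:
    tape.watch(x_check)
    y_check = GradReverse()(x_check)
    loss_check = tf.reduce_sum(y_check)
print(y_check.numpy())                              # identical to x_check (identity forward)
print(tape.gradient(loss_check, x_check).numpy())   # all -1 instead of +1 (reversed backward)
###Output
_____no_output_____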
###Markdown
We now provide you with a simple discriminator which you could use for the domain adversarial training.
###Code
def discriminator(x):
h = Dense(1024, activation='relu')(x)
h = Dense(1024, activation='relu')(h)
h = Dense(2, activation='softmax', name="dis")(h)
return h
###Output
_____no_output_____
###Markdown
**TODO** Training strategyNow it's time to add the discriminator to the network and see how it works. We provide you with some hints in the comments, but feel free to write the code in your own way as long as it performs a correct alignment.
###Code
def grl():
""" GRL strategy
returns: the classification branch, the discriminator branch
"""
input_dim = 512
### Define inputs
inputs = Input(shape=(input_dim, 1))
### Get features
features = feature_extractor(inputs)
### Get classification logits
logits = clf(features)
### Define the classification branch model
clf_branch = Model(inputs=inputs, outputs=logits)
adam = Adam(lr=0.0001)
clf_branch.compile(optimizer=adam,
loss={'clf': 'sparse_categorical_crossentropy'}, metrics=['accuracy'])
    ### Define the domain-adversarial (discriminator) branch model
features_rev = GradReverse()(features)
logits_da = discriminator(features_rev)
da_branch = Model(inputs=inputs, outputs=logits_da)
adam_da = Adam(lr=0.0001)
da_branch.compile(optimizer=adam_da,
loss={'dis': 'sparse_categorical_crossentropy'}, metrics=['accuracy'])
return clf_branch, da_branch
###Output
_____no_output_____
###Markdown
**TODO** Make it run! Add the training code for the above model.
###Code
### Some constants
NUM_EPOCH = 400
BATCH_SIZE = 128
DATASET_SIZE = 200
accs = []
import random as python_random
for i in range(10):
python_random.seed(i)
np.random.seed(i)
tf.random.set_seed(i)
clf_branch, da_branch = grl()
### Iterate over
for i in range(NUM_EPOCH * (DATASET_SIZE // BATCH_SIZE)):
### Randomly fetch training data
idx_src = np.random.choice(DATASET_SIZE, size=BATCH_SIZE, replace=False)
idx_tgt = np.random.choice(DATASET_SIZE, size=BATCH_SIZE, replace=False)
batch_src, batch_y = data_src_fft[idx_src], label_src[idx_src]
### We don't use any label from target domain
batch_tgt = data_tgt_fft[idx_tgt]
########## the training code for clf_branch ###################
clf_branch.train_on_batch(batch_src, batch_y)
########## the training code for discriminator branch #########
dis_y = np.concatenate([np.zeros_like(batch_y), np.ones_like(batch_y)], axis=0)
da_branch.train_on_batch(np.concatenate([batch_src, batch_tgt], axis=0), dis_y)
### Final results
score, acc = clf_branch.evaluate(data_tgt_fft, label_tgt, batch_size=200)
print("Final Accuracy", acc)
accs.append(acc)
print("ten run mean", np.mean(accs))
###Output
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:375: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
|
Caffe2_Pretrained_Models.ipynb | ###Markdown
you need to read Model_Quickload.ipynb before run this notebook.caffe2_tutorials is required to be uploaded in your Google Drive. Loading Pre-Trained Models DescriptionIn this tutorial, we will use the pre-trained `squeezenet` model from the [ModelZoo](https://github.com/caffe2/caffe2/wiki/Model-Zoo) to classify our own images. As input, we will provide the path (or URL) to an image we want to classify. It will also be helpful to know the [ImageNet object code](https://gist.githubusercontent.com/aaronmarkham/cd3a6b6ac071eca6f7b4a6e40e6038aa/raw/9edb4038a37da6b5a44c3b5bc52e448ff09bfe5b/alexnet_codes) for the image so we can verify our results. The 'object code' is nothing more than the integer label for the class used during training, for example "985" is the code for the class "daisy". Note, although we are using squeezenet here, this tutorial serves as a somewhat universal method for running inference on pretrained models.If you came from the [Image Pre-Processing Tutorial](https://caffe2.ai/docs/tutorial-image-pre-processing.html), you will see that we are using rescale and crop functions to prep the image, as well as reformatting the image to be CHW, BGR, and finally NCHW. We also correct for the image mean by either using the calculated mean from a provided npy file, or statically removing 128 as a placeholder average.Hopefully, you will find that loading pre-trained models is simple and syntactically concise. From a high level, these are the three required steps for running inference on a pretrained model:1. Read the init and predict protobuf (.pb) files of the pretrained model with open("init_net.pb", "rb") as f: init_net = f.read() with open("predict_net.pb", "rb") as f: predict_net = f.read() 2. Initialize a Predictor in your workspace with the blobs from the protobufs p = workspace.Predictor(init_net, predict_net)3. Run the net on some data and get the (softmax) results! results = p.run({'data': img})Note, assuming the last layer of the network is a softmax layer, the results come back as a multidimensional array of probabilities with length equal to the number of classes that the model was trained on. The probabilities may be indexed by the object code (integer type), so if you know the object code you can index the results array at that index to view the network's confidence that the input image is of that class.**Model Download Options**Although we will use `squeezenet` here, you can check out the [Model Zoo for pre-trained models](https://github.com/caffe2/caffe2/wiki/Model-Zoo) to browse/download a variety of pretrained models, or you can use Caffe2's `caffe2.python.models.download` module to easily acquire pre-trained models from [Github caffe2/models](http://github.com/caffe2/models). For our purposes, we will use the `models.download` module to download `squeezenet` into the `/caffe2/python/models` folder of our local Caffe2 installation with the following command:```python -m caffe2.python.models.download -i squeezenet```If the above download worked then you should have a directory named squeezenet in your `/caffe2/python/models` folder that contains `init_net.pb` and `predict_net.pb`. Note, if you do not use the `-i` flag, the model will be downloaded to your CWD, however it will still be a directory named squeezenet containing two protobuf files. Alternatively, if you wish to download all of the models, you can clone the entire repo using: ```git clone https://github.com/caffe2/models``` Code Before we start, lets take care of the required imports.
###Code
from google.colab import drive
drive.mount('/content/drive')
!git clone --recursive https://github.com/caffe2/tutorials caffe2_tutorials
%cd /content/drive/My Drive/caffe2_tutorials
!pip3 install torch torchvision
###Output
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.5.0+cu101)
Requirement already satisfied: torchvision in /usr/local/lib/python3.6/dist-packages (0.6.0+cu101)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch) (1.18.3)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch) (0.16.0)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision) (7.0.0)
###Markdown
Run the code below to download the pretrained squeezenet model.
###Code
!python -m caffe2.python.models.download -i squeezenet
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
%matplotlib inline
from caffe2.proto import caffe2_pb2
import numpy as np
import skimage.io
import skimage.transform
from matplotlib import pyplot
import os
from caffe2.python import core, workspace, models
import urllib2
import operator
print("Required modules imported.")
###Output
Required modules imported.
###Markdown
InputsHere, we will specify the inputs to be used for this run, including the input image, the model location, the mean file (optional), the required size of the image, and the location of the label mapping file.
###Code
# Configuration --- Change to your setup and preferences!
# This directory should contain the models downloaded from the model zoo. To run this
# tutorial, make sure there is a 'squeezenet' directory at this location that
# contains both the 'init_net.pb' and 'predict_net.pb'
CAFFE_MODELS = 'caffe2/python/models'
# Some sample images you can try, or use any URL to a regular image.
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Whole-Lemon.jpg/1235px-Whole-Lemon.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/7/7b/Orange-Whole-%26-Split.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/a/ac/Pretzel.jpg"
# IMAGE_LOCATION = "https://cdn.pixabay.com/photo/2015/02/10/21/28/flower-631765_1280.jpg"
IMAGE_LOCATION = "images/flower.jpg"
# codes - these help decipher the output and source from a list from ImageNet's object codes
# to provide a result like "tabby cat" or "lemon" depending on what's in the picture
# you submit to the CNN.
codes = "https://gist.githubusercontent.com/aaronmarkham/cd3a6b6ac071eca6f7b4a6e40e6038aa/raw/9edb4038a37da6b5a44c3b5bc52e448ff09bfe5b/alexnet_codes"
print("Config set!")
###Output
Config set!
###Markdown
Image PreprocessingNow that we have our inputs specified and verified the existence of the input network, we can load and pre-process the image for ingestion into a Caffe2 convolutional neural network! This is a very important step as the trained CNN requires a specifically sized input image whose values are from a particular distribution.
###Code
# Function to crop the center cropX x cropY pixels from the input image
def crop_center(img,cropx,cropy):
y,x,c = img.shape
startx = x//2-(cropx//2)
starty = y//2-(cropy//2)
return img[starty:starty+cropy,startx:startx+cropx]
# Function to rescale the input image to the desired height and/or width. This function will preserve
# the aspect ratio of the original image while making the image the correct scale so we can retrieve
# a good center crop. This function is best used with center crop to resize any size input images into
# specific sized images that our model can use.
def rescale(img, input_height, input_width):
# Get original aspect ratio
aspect = img.shape[1]/float(img.shape[0])
if(aspect>1):
# landscape orientation - wide image
res = int(aspect * input_height)
imgScaled = skimage.transform.resize(img, (input_width, res))
if(aspect<1):
# portrait orientation - tall image
res = int(input_width/aspect)
imgScaled = skimage.transform.resize(img, (res, input_height))
if(aspect == 1):
imgScaled = skimage.transform.resize(img, (input_width, input_height))
return imgScaled
# Load the image as a 32-bit float
# Note: skimage.io.imread returns a HWC ordered RGB image of some size
INPUT_IMAGE_SIZE = 227
img = skimage.img_as_float(skimage.io.imread(IMAGE_LOCATION)).astype(np.float32)
print("Original Image Shape: " , img.shape)
# Rescale the image to comply with our desired input size. This will not make the image 227x227
# but it will make either the height or width 227 so we can get the ideal center crop.
img = rescale(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
print("Image Shape after rescaling: " , img.shape)
pyplot.figure()
pyplot.imshow(img)
pyplot.title('Rescaled image')
# Crop the center 227x227 pixels of the image so we can feed it to our model
mean=128
img = crop_center(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
print("Image Shape after cropping: " , img.shape)
pyplot.figure()
pyplot.imshow(img)
pyplot.title('Center Cropped')
# switch to CHW (HWC --> CHW)
img = img.swapaxes(1, 2).swapaxes(0, 1)
print("CHW Image Shape: " , img.shape)
pyplot.figure()
for i in range(3):
# For some reason, pyplot subplot follows Matlab's indexing
# convention (starting with 1). Well, we'll just follow it...
pyplot.subplot(1, 3, i+1)
pyplot.imshow(img[i])
pyplot.axis('off')
pyplot.title('RGB channel %d' % (i+1))
# switch to BGR (RGB --> BGR)
img = img[(2, 1, 0), :, :]
# remove mean for better results
img = img * 255 - mean
# add batch size axis which completes the formation of the NCHW shaped input that we want
img = img[np.newaxis, :, :, :].astype(np.float32)
print("NCHW image (ready to be used as input): ", img.shape)
###Output
Original Image Shape: (751, 1280, 3)
Image Shape after rescaling: (227, 386, 3)
Image Shape after cropping: (227, 227, 3)
CHW Image Shape: (3, 227, 227)
NCHW image (ready to be used as input): (1, 3, 227, 227)
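###Markdown
As an optional visual sanity check (this helper is not part of the original tutorial), the preprocessing can be undone to confirm the NCHW tensor still looks like the original image.
###Code
# Undo the preprocessing for display purposes (illustrative helper)
def deprocess(nchw_img, mean_value=128):
    chw = nchw_img[0]                        # drop the batch axis
    chw = (chw + mean_value) / 255.0         # undo mean subtraction and 0-255 scaling
    chw = chw[(2, 1, 0), :, :]               # BGR -> RGB
    hwc = chw.swapaxes(0, 1).swapaxes(1, 2)  # CHW -> HWC
    return np.clip(hwc, 0, 1)
pyplot.figure()
pyplot.imshow(deprocess(img))
pyplot.title('Deprocessed check')
###Output
_____no_output_____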
###Markdown
Prepare the CNN and run the net!Now that the image is ready to be ingested by the CNN, let's open the protobufs, load them into the workspace, and run the net.
###Code
# when squeezenet is to be used.
from caffe2.python.models import squeezenet as mynet
init_net = mynet.init_net
predict_net = mynet.predict_net
# Initialize the predictor from the input protobufs
p = workspace.Predictor(init_net, predict_net)
# Run the net and return prediction
results = p.run({'data': img})
# Turn it into something we can play with and examine which is in a multi-dimensional array
results = np.asarray(results)
print("results shape: ", results.shape)
# Quick way to get the top-1 prediction result
# Squeeze out the unnecessary axis. This returns a 1-D array of length 1000
preds = np.squeeze(results)
# Get the prediction and the confidence by finding the maximum value and index of maximum value in preds array
curr_pred, curr_conf = max(enumerate(preds), key=operator.itemgetter(1))
print("Prediction: ", curr_pred)
print("Confidence: ", curr_conf)
###Output
results shape: (1, 1, 1000, 1, 1)
Prediction: 985
Confidence: 0.98222685
###Markdown
Process ResultsRecall ImageNet is a 1000 class dataset and observe that it is no coincidence that the third axis of results is length 1000. This axis is holding the probability for each category in the pre-trained model. So when you look at the results array at a specific index, the number can be interpreted as the probability that the input belongs to the class corresponding to that index. Now that we have run the predictor and collected the results, we can interpret them by matching them to their corresponding english labels.
###Code
# the rest of this is digging through the results
results = np.delete(results, 1)
index = 0
highest = 0
arr = np.empty((0,2), dtype=object)
arr[:,0] = int(10)
arr[:,1:] = float(10)
for i, r in enumerate(results):
# imagenet index begins with 1!
i=i+1
arr = np.append(arr, np.array([[i,r]]), axis=0)
if (r > highest):
highest = r
index = i
# top N results
N = 5
topN = sorted(arr, key=lambda x: x[1], reverse=True)[:N]
print("Raw top {} results: {}".format(N,topN))
# Isolate the indexes of the top-N most likely classes
topN_inds = [int(x[0]) for x in topN]
print("Top {} classes in order: {}".format(N,topN_inds))
# Now we can grab the code list and create a class Look Up Table
response = urllib2.urlopen(codes)
class_LUT = []
for line in response:
code, result = line.partition(":")[::2]
code = code.strip()
result = result.replace("'", "")
if code.isdigit():
class_LUT.append(result.split(",")[0][1:])
# For each of the top-N results, associate the integer result with an actual class
for n in topN:
print("Model predicts '{}' with {}% confidence".format(class_LUT[int(n[0])],float("{0:.2f}".format(n[1]*100))))
###Output
Raw top 5 results: [array([985.0, 0.9822268486022949], dtype=object), array([309.0, 0.01194374542683363], dtype=object), array([946.0, 0.004810206592082977], dtype=object), array([325.0, 0.00034070960828103125], dtype=object), array([944.0, 0.00023906711430754513], dtype=object)]
Top 5 classes in order: [985, 309, 946, 325, 944]
Model predicts 'daisy' with 98.22% confidence
Model predicts 'bee' with 1.19% confidence
Model predicts 'cardoon' with 0.48% confidence
Model predicts 'sulphur butterfly' with 0.03% confidence
Model predicts 'artichoke' with 0.02% confidence
###Markdown
Feeding Larger BatchesAbove is an example of how to feed one image at a time. We can achieve higher throughput if we feed multiple images at a time in a single batch. Recall, the data fed into the classifier is in 'NCHW' order, so to feed multiple images, we will expand the 'N' axis.
###Code
# List of input images to be fed
images = ["images/cowboy-hat.jpg",
"images/cell-tower.jpg",
"images/Ducreux.jpg",
"images/pretzel.jpg",
"images/orangutan.jpg",
"images/aircraft-carrier.jpg",
"images/cat.jpg"]
# Allocate space for the batch of formatted images
NCHW_batch = np.zeros((len(images),3,227,227))
print ("Batch Shape: ",NCHW_batch.shape)
# For each of the images in the list, format it and place it in the batch
for i,curr_img in enumerate(images):
img = skimage.img_as_float(skimage.io.imread(curr_img)).astype(np.float32)
img = rescale(img, 227, 227)
img = crop_center(img, 227, 227)
img = img.swapaxes(1, 2).swapaxes(0, 1)
img = img[(2, 1, 0), :, :]
img = img * 255 - mean
NCHW_batch[i] = img
print("NCHW image (ready to be used as input): ", NCHW_batch.shape)
# Run the net on the batch
results = p.run([NCHW_batch.astype(np.float32)])
# Turn it into something we can play with and examine which is in a multi-dimensional array
results = np.asarray(results)
# Squeeze out the unnecessary axis
preds = np.squeeze(results)
print("Squeezed Predictions Shape, with batch size {}: {}".format(len(images),preds.shape))
# Describe the results
for i,pred in enumerate(preds):
print("Results for: '{}'".format(images[i]))
# Get the prediction and the confidence by finding the maximum value
# and index of maximum value in preds array
curr_pred, curr_conf = max(enumerate(pred), key=operator.itemgetter(1))
print("\tPrediction: ", curr_pred)
print("\tClass Name: ", class_LUT[int(curr_pred)])
print("\tConfidence: ", curr_conf)
###Output
Batch Shape: (7, 3, 227, 227)
NCHW image (ready to be used as input): (7, 3, 227, 227)
Squeezed Predictions Shape, with batch size 7: (7, 1000)
Results for: 'images/cowboy-hat.jpg'
Prediction: 515
Class Name: cowboy hat
Confidence: 0.8500884
Results for: 'images/cell-tower.jpg'
Prediction: 645
Class Name: maypole
Confidence: 0.18584298
Results for: 'images/Ducreux.jpg'
Prediction: 568
Class Name: fur coat
Confidence: 0.10253151
Results for: 'images/pretzel.jpg'
Prediction: 932
Class Name: pretzel
Confidence: 0.99962187
Results for: 'images/orangutan.jpg'
Prediction: 365
Class Name: orangutan
Confidence: 0.99200517
Results for: 'images/aircraft-carrier.jpg'
Prediction: 403
Class Name: aircraft carrier
Confidence: 0.9998778
Results for: 'images/cat.jpg'
Prediction: 281
Class Name: tabby
Confidence: 0.5133139
|
assignment_1/svm.ipynb | ###Markdown
Multiclass Support Vector Machine exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*In this exercise you will: - implement a fully-vectorized **loss function** for the SVM- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** using numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
# Run some setup code for this notebook.
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
CIFAR-10 Data Loading and Preprocessing
###Code
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print('Training data shape: ', X_train.shape)
print('Validation data shape: ', X_val.shape)
print('Test data shape: ', X_test.shape)
print('dev data shape: ', X_dev.shape)
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print(mean_image[:10]) # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print(X_train.shape, X_val.shape, X_test.shape, X_dev.shape)
###Output
(49000, 3073) (1000, 3073) (1000, 3073) (500, 3073)
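###Markdown
A quick illustration (added, not part of the assignment) of why the bias trick works: with a column of ones appended to the data, a single matrix multiply computes X.dot(W_raw) + b, where b is stored as the last row of W.
###Code
# Bias trick check (illustrative): scores from the augmented data equal X.dot(W_raw) + b
W_demo = np.random.randn(3073, 10) * 0.0001
scores_trick = X_dev.dot(W_demo)
scores_explicit = X_dev[:, :-1].dot(W_demo[:-1, :]) + W_demo[-1, :]
print(np.allclose(scores_trick, scores_explicit))  # True
###Output
_____no_output_____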
###Markdown
SVM ClassifierYour code for this section will all be written inside **cs231n/classifiers/linear_svm.py**. As you can see, we have prefilled the function `svm_loss_naive` which uses for loops to evaluate the multiclass SVM loss function.
###Code
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.000005)
print('loss: %f' % (loss, ))
###Output
loss: 8.832528
###Markdown
The `grad` returned from the function above is right now all zero. Derive and implement the gradient for the SVM cost function and implement it inline inside the function `svm_loss_naive`. You will find it helpful to interleave your new code inside the existing function.To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
###Code
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad)
###Output
numerical: 21.512101 analytic: 21.512101, relative error: 9.679084e-12
numerical: 25.957882 analytic: 25.957882, relative error: 1.242703e-11
numerical: 1.106131 analytic: 1.106131, relative error: 1.344887e-10
numerical: 6.769462 analytic: 6.769462, relative error: 6.959700e-11
numerical: 39.522899 analytic: 39.522899, relative error: 5.019458e-13
numerical: -3.574877 analytic: -3.574877, relative error: 9.355330e-11
numerical: -7.532184 analytic: -7.532184, relative error: 2.726241e-11
numerical: -1.533530 analytic: -1.533530, relative error: 1.473582e-10
numerical: 18.725500 analytic: 18.725500, relative error: 2.143069e-11
numerical: -5.819440 analytic: -5.819440, relative error: 4.704975e-11
numerical: 10.282777 analytic: 10.295436, relative error: 6.151924e-04
numerical: 4.250515 analytic: 4.251282, relative error: 9.018494e-05
numerical: 17.140006 analytic: 17.137723, relative error: 6.658625e-05
numerical: 27.993182 analytic: 28.001672, relative error: 1.516246e-04
numerical: -1.482963 analytic: -1.480567, relative error: 8.084318e-04
numerical: -29.266280 analytic: -29.265626, relative error: 1.116813e-05
numerical: -32.735101 analytic: -32.738884, relative error: 5.778138e-05
numerical: 14.903538 analytic: 14.905374, relative error: 6.157451e-05
numerical: -29.382575 analytic: -29.382495, relative error: 1.361341e-06
numerical: 5.275049 analytic: 5.269249, relative error: 5.500113e-04
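###Markdown
A one-dimensional illustration (added, not part of the assignment) of how a kink can break a gradient check: for f(x) = |x|, a centered difference taken very close to 0 straddles the kink and disagrees with the analytic slope of -1.
###Code
# Kink example (illustrative): numeric vs analytic gradient of f(x) = |x| near 0
f = lambda x: abs(x)
x0, h = -1e-6, 1e-5
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # centered difference straddles the kink
analytic = -1.0                              # slope just left of zero
print('numeric: %f analytic: %f' % (numeric, analytic))
###Output
_____no_output_____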
###Markdown
Inline Question 1:It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? *Hint: the SVM loss function is not strictly speaking differentiable***Your Answer:**1. What could such a discrepancy be caused by? - Kinks (non-differentiable points) in the hinge loss: if the finite-difference step crosses a point where a margin term max(0, .) switches on or off, the numeric estimate and the analytic (sub)gradient disagree.2. Is it a reason for concern? - Usually not; it only affects the few dimensions whose margins sit almost exactly at a kink, and the discrepancy shrinks as the step size h is reduced.3. What is a simple example in one dimension where a gradient check could fail? - f(x) = |x| checked at a point very close to 0 (e.g. x = -1e-6 with h = 1e-5), where the centered difference straddles the kink.
###Code
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# The losses should match but your vectorized implementation should be much faster.
print('difference: %f' % (loss_naive - loss_vectorized))
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss and gradient: computed in %fs' % (toc - tic))
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss and gradient: computed in %fs' % (toc - tic))
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('difference: %f' % difference)
###Output
Naive loss and gradient: computed in 0.160905s
Vectorized loss and gradient: computed in 0.004000s
difference: 0.000000
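###Markdown
For reference, here is one possible fully-vectorized loss and gradient, sketched under the assumption that the regularization term is reg * sum(W*W); the graded implementation lives in cs231n/classifiers/linear_svm.py and may differ in details.
###Code
# Reference sketch (not the graded file): vectorized multiclass SVM loss and gradient
def svm_loss_vectorized_sketch(W, X, y, reg):
    num_train = X.shape[0]
    scores = X.dot(W)                                     # (N, C)
    correct = scores[np.arange(num_train), y][:, None]    # (N, 1) correct-class scores
    margins = np.maximum(0, scores - correct + 1.0)       # hinge with delta = 1
    margins[np.arange(num_train), y] = 0                  # correct class contributes no margin
    loss = margins.sum() / num_train + reg * np.sum(W * W)
    # each positive margin adds +x_i to its class column and -x_i to the correct class column
    binary = (margins > 0).astype(X.dtype)
    binary[np.arange(num_train), y] = -binary.sum(axis=1)
    dW = X.T.dot(binary) / num_train + 2 * reg * W
    return loss, dW
print(svm_loss_vectorized_sketch(W, X_dev, y_dev, 0.000005)[0])  # should be close to the losses above
###Output
_____no_output_____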
###Markdown
Stochastic Gradient DescentWe now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
###Code
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4,
num_iters=1500, verbose=True)
toc = time.time()
print('That took %fs' % (toc - tic))
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print('training accuracy: %f' % (np.mean(y_train == y_train_pred), ))
y_val_pred = svm.predict(X_val)
print('validation accuracy: %f' % (np.mean(y_val == y_val_pred), ))
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-8, 1e-7, 5e-5]
regularization_strengths = [5e5, 2.5e4, 5e4]
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
best_learning_rate = -1
best_reg_str = -1
for learning_rate in learning_rates:
for reg_str in regularization_strengths:
linear_svm = LinearSVM()
linear_svm.train(X_train, y_train, learning_rate=learning_rate, reg=reg_str, num_iters=500)
y_train_pred = linear_svm.predict(X_train)
train_acc = np.mean(y_train_pred == y_train)
y_val_pred = linear_svm.predict(X_val)
val_acc = np.mean(y_val_pred == y_val)
results[(learning_rate, reg_str)] = (train_acc, val_acc)
if val_acc > best_val:
best_learning_rate = learning_rate
best_reg_str = reg_str
best_val = val_acc
best_svm = linear_svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____
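###Markdown
For reference, a minimal sketch (assumption: vanilla minibatch SGD, which is what LinearClassifier.train() is expected to do) of the update loop used above; the graded implementation in cs231n/classifiers/linear_classifier.py may differ in details such as sampling with or without replacement.
###Code
# Minimal SGD sketch (illustrative, not the graded file)
def sgd_train_sketch(W, X, y, loss_fn, learning_rate=1e-7, reg=2.5e4,
                     num_iters=1500, batch_size=200):
    num_train = X.shape[0]
    loss_history = []
    for it in range(num_iters):
        idx = np.random.choice(num_train, batch_size)  # sample a minibatch
        loss, grad = loss_fn(W, X[idx], y[idx], reg)
        loss_history.append(loss)
        W -= learning_rate * grad                      # vanilla gradient descent step
    return W, loss_history
###Output
_____no_output_____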
###Markdown
Multiclass Support Vector Machine exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*In this exercise you will: - implement a fully-vectorized **loss function** for the SVM- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** using numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
CIFAR-10 Data Loading and Preprocessing
###Code
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print('Training data shape: ', X_train.shape)
print('Validation data shape: ', X_val.shape)
print('Test data shape: ', X_test.shape)
print('dev data shape: ', X_dev.shape)
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print(mean_image[:10]) # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print(X_train.shape, X_val.shape, X_test.shape, X_dev.shape)
###Output
(49000, 3073) (1000, 3073) (1000, 3073) (500, 3073)
###Markdown
SVM ClassifierYour code for this section will all be written inside **cs231n/classifiers/linear_svm.py**. As you can see, we have prefilled the function `svm_loss_naive` (imported below), which uses for loops to evaluate the multiclass SVM loss function.
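For orientation, here is a minimal sketch of what such a loop-based hinge loss looks like; the function name `svm_loss_loops` and the fixed margin `delta = 1` are illustrative assumptions, and the graded version in `cs231n/classifiers/linear_svm.py` additionally returns the gradient `dW`.

```python
# Minimal sketch of a loop-based multiclass SVM (hinge) loss.
# Assumed shapes: W (D, C), X (N, D), y (N,) with integer class labels.
import numpy as np

def svm_loss_loops(W, X, y, reg):
    num_train, num_classes = X.shape[0], W.shape[1]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)                    # class scores for one example
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1.0   # delta = 1
            if margin > 0:
                loss += margin
    loss = loss / num_train + reg * np.sum(W * W)            # average + L2 penalty
    return loss
```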
###Code
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive, svm_loss_vectorized
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.000005)
print('loss: %f' % (loss, ))
###Output
loss: 9.239406
###Markdown
The `grad` returned from the function above is right now all zero. Derive the gradient for the SVM cost function and implement it inline inside the function `svm_loss_naive`. You will find it helpful to interleave your new code inside the existing function. To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
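As a reminder of what the numerical check computes, a minimal centered-difference sketch is below; the helper names and the step size `h` are illustrative assumptions, while the course's `grad_check_sparse` applies the same idea to several randomly chosen entries of `W` and reports the relative error.

```python
# Minimal centered-difference gradient check for a single entry of W.
# f is a function of W returning a scalar loss; ix indexes one entry of W.
import numpy as np

def numeric_grad_at(f, W, ix, h=1e-5):
    old_value = W[ix]
    W[ix] = old_value + h
    fxph = f(W)                      # f evaluated with W[ix] nudged up by h
    W[ix] = old_value - h
    fxmh = f(W)                      # f evaluated with W[ix] nudged down by h
    W[ix] = old_value                # restore W
    return (fxph - fxmh) / (2.0 * h)

def relative_error(a, b):
    return abs(a - b) / max(1e-12, abs(a) + abs(b))
```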
###Code
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
print('=' * 20, "Loss with regularization: ")
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad)
###Output
numerical: -6.741442 analytic: -6.741442, relative error: 2.828434e-11
numerical: -14.275176 analytic: -14.275176, relative error: 3.849968e-11
numerical: 13.487011 analytic: 13.487011, relative error: 3.661542e-11
numerical: -4.334114 analytic: -4.334114, relative error: 1.647401e-11
numerical: -13.386868 analytic: -13.386868, relative error: 5.152109e-12
numerical: 0.971670 analytic: 0.971670, relative error: 3.123937e-10
numerical: -18.612078 analytic: -18.612078, relative error: 1.413941e-11
numerical: 27.261524 analytic: 27.261524, relative error: 9.337733e-12
numerical: -1.926872 analytic: -1.926872, relative error: 8.567912e-11
numerical: -16.559478 analytic: -16.559478, relative error: 1.512204e-12
==================== Loss with regularization:
numerical: -36.075382 analytic: -36.075382, relative error: 3.802422e-12
numerical: 22.583156 analytic: 22.583156, relative error: 7.815428e-12
numerical: 18.050644 analytic: 18.050644, relative error: 5.186088e-12
numerical: -23.784526 analytic: -23.784526, relative error: 8.224282e-12
numerical: -11.989723 analytic: -11.989723, relative error: 1.120753e-11
numerical: 13.289541 analytic: 13.289541, relative error: 5.535768e-12
numerical: -5.847753 analytic: -5.847753, relative error: 3.157663e-11
numerical: 17.998841 analytic: 17.998841, relative error: 1.781505e-11
numerical: -10.503356 analytic: -10.503356, relative error: 7.359801e-12
numerical: -9.300132 analytic: -9.300132, relative error: 4.530295e-11
###Markdown
Inline Question 1:It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? *Hint: the SVM loss function is not strictly speaking differentiable***Your Answer:** *There are two possible reasons. The first lies in the numerical gradient computation itself: it is only an approximation, so it can introduce a small error. The second is that if the value of a margin is exactly 0 for at least one dimension, the loss function is not differentiable at that point, so the analytic and numerical gradients can legitimately disagree there.*
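A concrete 1-D illustration of the hint, assuming `f(x) = max(0, x)`: right next to the kink at 0 the centered difference straddles the non-differentiable point, so it disagrees with the subgradient an implementation would return, and the relative error blows up even though the code is correct.

```python
# 1-D example of a gradient check failing at a kink of f(x) = max(0, x).
f = lambda x: max(0.0, x)
x, h = 1e-6, 1e-5                             # point just to the right of the kink
numeric = (f(x + h) - f(x - h)) / (2 * h)     # ~0.55, the difference straddles 0
analytic = 1.0 if x > 0 else 0.0              # subgradient an implementation returns
print(numeric, analytic)                      # 0.55 vs 1.0 -- large relative error
```

An occasional mismatch of this kind is therefore expected and not by itself a reason for concern.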
###Code
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# The losses should match but your vectorized implementation should be much faster.
print('difference: %f' % (loss_naive - loss_vectorized))
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss and gradient: computed in %fs' % (toc - tic))
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss and gradient: computed in %fs' % (toc - tic))
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('difference: %f' % difference)
###Output
Naive loss and gradient: computed in 0.117825s
Vectorized loss and gradient: computed in 0.013008s
difference: 0.000000
###Markdown
Stochastic Gradient DescentWe now have vectorized and efficient expressions for the loss and the gradient, and our analytic gradient matches the numerical gradient. We are therefore ready to use SGD to minimize the loss.
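The update loop itself is conceptually small; below is a minimal sketch assuming a `loss_and_grad` callable that behaves like `svm_loss_vectorized(W, X_batch, y_batch, reg) -> (loss, dW)`. The batch size and learning rate shown are illustrative defaults, and the graded loop belongs in `LinearClassifier.train`.

```python
# Minimal SGD sketch; loss_and_grad is assumed to return (loss, dW).
import numpy as np

def sgd(loss_and_grad, W, X, y, reg=2.5e4, lr=1e-7, batch_size=256, num_iters=1500):
    num_train = X.shape[0]
    loss_history = []
    for _ in range(num_iters):
        idx = np.random.choice(num_train, batch_size, replace=True)  # sample a minibatch
        loss, dW = loss_and_grad(W, X[idx], y[idx], reg)
        W -= lr * dW                                                  # step against the gradient
        loss_history.append(loss)
    return W, loss_history
```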
###Code
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4,
num_iters=1500, verbose=True)
toc = time.time()
print('That took %fs' % (toc - tic))
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print('training accuracy: %f' % (np.mean(y_train == y_train_pred), ))
y_val_pred = svm.predict(X_val)
print('validation accuracy: %f' % (np.mean(y_val == y_val_pred), ))
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = np.linspace(1e-7, 1e-6, 10)
regularization_strengths = np.linspace(1e4, 2e4, 10)
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
print("=" * 10, "lr = {:.10f}, reg = {:.10f}".format(lr, reg), "="*10 )
svm = LinearSVM()
svm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train)
y_val_pred = svm.predict(X_val)
training_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
print("\ttrain acc = {} --- val acc = {}".format(*results[(lr, reg)]))
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.tight_layout()
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____ |
AlphaFold2_complexes.ipynb | ###Markdown
AlphaFold2_complexes---------**UPDATE** (Aug. 13, 2021)This notebook is being retired and no longer updated. The functionality for complex prediction (including going beyond dimers) has been integrated in our [new advanced notebook](https://github.com/sokrypton/ColabFold/blob/main/beta/AlphaFold2_advanced.ipynb).---------Credit to Minkyung Baek @minkbaek and Yoshitaka Moriwaki @Ag_smith for initially showing protein-complex prediction works in alphafold2.- https://twitter.com/minkbaek/status/1417538291709071362- https://twitter.com/Ag_smith/status/1417063635000598528- [script](https://github.com/RosettaCommons/RoseTTAFold/blob/main/example/complex_modeling/make_joint_MSA_bacterial.py) from rosettafold for paired alignment generation**Instructions**- For *monomers* and *homo-oligomers*, see this [notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb).- For prokaryotic protein complexes (found in operons), we recommend using the `pair_msa` option.**Limitations**- This notebook does NOT use templates or amber relax at the end for refinement.- For a typical Google-Colab-GPU (16G) session, the max total length is **1400 residues**.
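One detail worth calling out before the code: the two chains are concatenated into a single sequence, and `predict_structure` below adds a large offset (200) to the residue index at the chain boundary so the model treats it as a chain break rather than a covalent link. A minimal standalone sketch of that indexing, with example chain lengths that are illustrative only:

```python
# Minimal sketch of the chain-break trick used in predict_structure below:
# add a large jump (200) to the residue index at every chain boundary.
import numpy as np

Ls = [146, 74]                       # example chain lengths (illustrative)
idx_res = np.arange(sum(Ls))         # 0..L-1 for the concatenated sequence
L_prev = 0
for L_i in Ls[:-1]:
    idx_res[L_prev + L_i:] += 200    # everything after the boundary jumps by 200
    L_prev += L_i
# idx_res now runs 0..145 for chain A and 346..419 for chain B
```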
###Code
#@title Input protein sequences
import os
os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0'
from google.colab import files
import os.path
import re
import hashlib
def add_hash(x,y):
return x+"_"+hashlib.sha1(y.encode()).hexdigest()[:5]
query_sequence_a = 'AVLKIIQGALDTRELLKAYQEEACAKNFGAFCVFVGIVRKEDNIQGLSFDIYEALLKTWFEKWHHKAKDLGVVLKMAHSLGDVLIGQSSFLCVSMGKNRKNALELYENFIEDFKHNAPIWKYDLIHNKRIYAKERSHPLKGSGLLA' #@param {type:"string"}
query_sequence_a = "".join(query_sequence_a.split())
query_sequence_a = re.sub(r'[^A-Z]','', query_sequence_a.upper())
query_sequence_b = 'MMVEVRFFGPIKEENFFIKANDLKELRAILQEKEGLKEWLGVCAIALNDHLIDNLNTPLKDGDVISLLPPVCGG' #@param {type:"string"}
query_sequence_b = "".join(query_sequence_b.split())
query_sequence_b = re.sub(r'[^A-Z]','', query_sequence_b.upper())
# Using trick from @onoda_hiroki
# https://twitter.com/onoda_hiroki/status/1420068104239910915
# "U" indicates an "UNKNOWN" residue and it will not be modeled
# But we need linker of at least length 32
query_sequence_a = re.sub(r'U+',"U"*32,query_sequence_a)
query_sequence_b = re.sub(r'U+',"U"*32,query_sequence_b)
query_sequence = query_sequence_a + query_sequence_b
if len(query_sequence) > 1400:
print(f"WARNING: For a typical Google-Colab-GPU (16G) session, the max total length is 1400 residues. You are at {len(query_sequence)}!")
jobname = 'test' #@param {type:"string"}
jobname = "".join(jobname.split())
jobname = re.sub(r'\W+', '', jobname)
jobname = add_hash(jobname, query_sequence)
# number of models to use
#@markdown ---
#@markdown ### Advanced settings
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
msa_mode = "MMseqs2" #@param ["MMseqs2","single_sequence"]
use_msa = True if msa_mode == "MMseqs2" else False
pair_msa = False #@param {type:"boolean"}
disable_mmseqs2_filter = pair_msa
#@markdown ---
with open(f"{jobname}.log", "w") as text_file:
text_file.write("num_models=%s\n" % num_models)
text_file.write("use_msa=%s\n" % use_msa)
text_file.write("msa_mode=%s\n" % msa_mode)
text_file.write("pair_msa=%s\n" % pair_msa)
text_file.write("disable_mmseqs2_filter=%s\n" % disable_mmseqs2_filter)
#@title Install dependencies
%%bash -s $use_msa
USE_MSA=$1
if [ ! -f AF2_READY ]; then
# install dependencies
pip -q install biopython
pip -q install dm-haiku
pip -q install ml-collections
pip -q install py3Dmol
wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/colabfold.py
# download model
if [ ! -d "alphafold/" ]; then
git clone https://github.com/deepmind/alphafold.git --quiet
mv alphafold alphafold_
mv alphafold_/alphafold .
# remove "END" from PDBs, otherwise biopython complains
sed -i "s/pdb_lines.append('END')//" /content/alphafold/common/protein.py
sed -i "s/pdb_lines.append('ENDMDL')//" /content/alphafold/common/protein.py
fi
# download model params (~1 min)
if [ ! -d "params/" ]; then
wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
mkdir params
tar -xf alphafold_params_2021-07-14.tar -C params/
rm alphafold_params_2021-07-14.tar
fi
touch AF2_READY
fi
#@title Import libraries
# setup the model
if "IMPORTED" not in dir():
import time
import requests
import tarfile
import sys
import numpy as np
import pickle
from string import ascii_uppercase
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
from alphafold.data.tools import hhsearch
import colabfold as cf
# plotting libraries
import py3Dmol
import matplotlib.pyplot as plt
IMPORTED = True
def set_bfactor(pdb_filename, bfac, idx_res, chains):
I = open(pdb_filename,"r").readlines()
O = open(pdb_filename,"w")
for line in I:
if line[0:6] == "ATOM ":
seq_id = int(line[22:26].strip()) - 1
seq_id = np.where(idx_res == seq_id)[0][0]
O.write(f"{line[:21]}{chains[seq_id]}{line[22:60]}{bfac[seq_id]:6.2f}{line[66:]}")
O.close()
def predict_structure(prefix, feature_dict, Ls, random_seed=0, num_models=5):
"""Predicts structure using AlphaFold for the given sequence."""
# Minkyung's code
# add big enough number to residue index to indicate chain breaks
idx_res = feature_dict['residue_index']
L_prev = 0
# Ls: number of residues in each chain
for L_i in Ls[:-1]:
idx_res[L_prev+L_i:] += 200
L_prev += L_i
chains = list("".join([ascii_uppercase[n]*L for n,L in enumerate(Ls)]))
feature_dict['residue_index'] = idx_res
# Run the models.
plddts = []
paes = []
unrelaxed_pdb_lines = []
relaxed_pdb_lines = []
model_names = ["model_4","model_1","model_2","model_3","model_5"][:num_models]
for n,model_name in enumerate(model_names):
model_config = config.model_config(model_name+"_ptm")
model_config.data.eval.num_ensemble = 1
model_params = data.get_model_haiku_params(model_name+"_ptm", data_dir=".")
if model_name == "model_4":
model_runner = model.RunModel(model_config, model_params)
processed_feature_dict = model_runner.process_features(feature_dict,random_seed=0)
else:
# swap params
for k in model_runner.params.keys():
model_runner.params[k] = model_params[k]
print(f"running model_{n+1}")
prediction_result = model_runner.predict(processed_feature_dict)
# cleanup to save memory
if model_name == "model_5": del model_runner
del model_params
unrelaxed_protein = protein.from_prediction(processed_feature_dict,prediction_result)
unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
plddts.append(prediction_result['plddt'])
paes.append(prediction_result['predicted_aligned_error'])
# Delete unused outputs to save memory.
del prediction_result
# rerank models based on predicted lddt
lddt_rank = np.mean(plddts,-1).argsort()[::-1]
plddts_ranked = {}
paes_ranked = {}
print("model\tplldt\tpae_ab")
L = Ls[0]
for n,r in enumerate(lddt_rank):
plddt = plddts[r].mean()
pae_ab = (paes[r][L:,:L].mean() + paes[r][:L,L:].mean()) / 2
print(f"model_{n+1}\t{plddt:.2f}\t{pae_ab:.2f}")
unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
with open(unrelaxed_pdb_path, 'w') as f:
f.write(unrelaxed_pdb_lines[r])
set_bfactor(unrelaxed_pdb_path, plddts[r], idx_res, chains)
plddts_ranked[f"model_{n+1}"] = plddts[r]
paes_ranked[f"model_{n+1}"] = paes[r]
return plddts_ranked, paes_ranked
# CODE FROM MINKYUNG/ROSETTAFOLD
def read_a3m(a3m_lines):
'''parse an a3m files as a dictionary {label->sequence}'''
seq = []
lab = []
is_first = True
for line in a3m_lines.splitlines():
if line[0] == '>':
label = line.rstrip().split()[0][1:]
is_incl = True
if is_first: # include first sequence (query)
is_first = False
lab.append(label)
continue
if "UniRef" in label:
code = label.split()[0].split('_')[-1]
if code.startswith("UPI"): # UniParc identifier -- exclude
is_incl = False
continue
elif label.startswith("tr|"):
code = label.split('|')[1]
else:
is_incl = False
continue
lab.append(code)
else:
if is_incl:
seq.append(line.rstrip())
else:
continue
return seq, lab
# https://www.uniprot.org/help/accession_numbers
def uni2idx(ids):
'''convert uniprot ids into integers according to the structure
of uniprot accession numbers'''
ids2 = [i.split("-")[0] for i in ids]
ids2 = [i+'AAA0' if len(i)==6 else i for i in ids2]
arr = np.array([list(s) for s in ids2], dtype='|S1').view(np.uint8)
for i in [1,5,9]:
arr[:,i] -= ord('0')
arr[arr>=ord('A')] -= ord('A')
arr[arr>=ord('0')] -= ord('0')-26
arr[:,0][arr[:,0]>ord('Q')-ord('A')] -= 3
arr = arr.astype(np.int64)
coef = np.array([23,10,26,36,36,10,26,36,36,1], dtype=np.int64)
coef = np.tile(coef[None,:],[len(ids),1])
c1 = [i for i,id_ in enumerate(ids) if id_[0] in 'OPQ' and len(id_)==6]
c2 = [i for i,id_ in enumerate(ids) if id_[0] not in 'OPQ' and len(id_)==6]
coef[c1] = np.array([3, 10,36,36,36,1,1,1,1,1])
coef[c2] = np.array([23,10,26,36,36,1,1,1,1,1])
for i in range(1,10):
coef[:,-i-1] *= coef[:,-i]
return np.sum(arr*coef,axis=-1)
def run_mmseqs2(query_sequence, prefix, use_env=True, filter=False):
def submit(query_sequence, mode):
res = requests.post('https://a3m.mmseqs.com/ticket/msa', data={'q':f">1\n{query_sequence}", 'mode': mode})
return res.json()
def status(ID):
res = requests.get(f'https://a3m.mmseqs.com/ticket/{ID}')
return res.json()
def download(ID, path):
res = requests.get(f'https://a3m.mmseqs.com/result/download/{ID}')
with open(path,"wb") as out: out.write(res.content)
if filter:
mode = "env" if use_env else "all"
else:
mode = "env-nofilter" if use_env else "nofilter"
path = f"{prefix}_{mode}"
if not os.path.isdir(path): os.mkdir(path)
# call mmseqs2 api
tar_gz_file = f'{path}/out.tar.gz'
if not os.path.isfile(tar_gz_file):
out = submit(query_sequence, mode)
while out["status"] in ["RUNNING","PENDING"]:
time.sleep(1)
out = status(out["id"])
download(out["id"], tar_gz_file)
# parse a3m files
a3m_lines = []
a3m = f"{prefix}_{mode}.a3m"
if not os.path.isfile(a3m):
with tarfile.open(tar_gz_file) as tar_gz: tar_gz.extractall(path)
a3m_files = [f"{path}/uniref.a3m"]
if use_env: a3m_files.append(f"{path}/bfd.mgnify30.metaeuk30.smag30.a3m")
a3m_out = open(a3m,"w")
for a3m_file in a3m_files:
for line in open(a3m_file,"r"):
line = line.replace("\x00","")
if len(line) > 0:
a3m_lines.append(line)
a3m_out.write(line)
else:
a3m_lines = open(a3m).readlines()
return "".join(a3m_lines), len(a3m_lines)
#@title Call MMseqs2 to get MSA for each gene
Ls = [len(query_sequence_a),len(query_sequence_b)]
msas = []
deletion_matrices = []
if use_msa:
os.makedirs('tmp', exist_ok=True)
prefix = hashlib.sha1(query_sequence.encode()).hexdigest()
prefix = os.path.join('tmp',prefix)
print(f"running mmseqs2 (use_env={True} filter={True})")
a3m_lines = cf.run_mmseqs2([query_sequence_a, query_sequence_b], prefix, use_env=True, filter=True)
if pair_msa:
a3m_lines.append([])
print(f"running mmseqs2 for pair_msa (use_env={False} filter={False})")
a3m_lines_pair = cf.run_mmseqs2([query_sequence_a, query_sequence_b], prefix, use_env=False, filter=False)
# CODE FROM MINKYUNG/ROSETTAFOLD
msa1, lab1 = read_a3m(a3m_lines_pair[0])
msa2, lab2 = read_a3m(a3m_lines_pair[1])
if len(lab1) > 1 and len(lab2) > 1:
# convert uniprot ids into integers
hash1 = uni2idx(lab1[1:])
hash2 = uni2idx(lab2[1:])
# find pairs of uniprot ids which are separated by at most 10
idx1, idx2 = np.where(np.abs(hash1[:,None]-hash2[None,:]) < 10)
if idx1.shape[0] > 0:
a3m_lines[2] = ['>query\n%s%s\n'%(msa1[0],msa2[0])]
for i,j in zip(idx1,idx2):
a3m_lines[2].append(">%s_%s\n%s%s\n"%(lab1[i+1],lab2[j+1],msa1[i+1],msa2[j+1]))
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(a3m_lines[2]))
msas.append(msa)
deletion_matrices.append(deletion_matrix)
print("pairs found:",len(msa))
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines[0])
msas.append([seq+"-"*Ls[1] for seq in msa])
deletion_matrices.append([mtx+[0]*Ls[1] for mtx in deletion_matrix])
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines[1])
msas.append(["-"*Ls[0]+seq for seq in msa])
deletion_matrices.append([[0]*Ls[0]+mtx for mtx in deletion_matrix])
else:
msas.append([query_sequence])
deletion_matrices.append([[0]*len(query_sequence)])
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=msas, deletion_matrices=deletion_matrices),
}
#@title Plot Number of Sequences per Position
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(dpi=dpi)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.savefig(jobname+"_msa_coverage.png")
plt.show()
#@title Predict structure
plddts, paes = predict_structure(jobname, feature_dict, Ls=Ls, num_models=num_models)
#@title Plot Predicted Alignment Error
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(figsize=(3*num_models,2), dpi=dpi)
for n,(model_name,value) in enumerate(paes.items()):
plt.subplot(1,num_models,n+1)
plt.title(model_name)
plt.imshow(value,label=model_name,cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.savefig(jobname+"_PAE.png")
plt.show()
#@title Plot lDDT per residue
# confidence per position
dpi = 100#@param {type:"integer"}
plt.figure(dpi=dpi)
for model_name,value in plddts.items():
plt.plot(value,label=model_name)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted lDDT")
plt.xlabel("positions")
plt.savefig(jobname+"_lDDT.png")
plt.show()
#@title Display 3D structure {run: "auto"}
model_num = 1 #@param ["1", "2", "3", "4", "5"] {type:"raw"}
color = "chain" #@param ["chain", "lDDT", "rainbow"]
show_sidechains = False #@param {type:"boolean"}
show_mainchains = False #@param {type:"boolean"}
def plot_plddt_legend():
thresh = ['plDDT:','Very low (<50)','Low (60)','OK (70)','Confident (80)','Very high (>90)']
plt.figure(figsize=(1,0.1),dpi=100)
########################################
for c in ["#FFFFFF","#FF0000","#FFFF00","#00FF00","#00FFFF","#0000FF"]:
plt.bar(0, 0, color=c)
plt.legend(thresh, frameon=False,
loc='center', ncol=6,
handletextpad=1,
columnspacing=1,
markerscale=0.5,)
plt.axis(False)
return plt
def plot_confidence(model_num=1):
model_name = f"model_{model_num}"
plt.figure(figsize=(10,3),dpi=100)
"""Plots the legend for plDDT."""
#########################################
plt.subplot(1,2,1); plt.title('Predicted lDDT')
plt.plot(plddts[model_name])
for x in [len(query_sequence_a)]:
plt.plot([x,x],[0,100],color="black")
plt.ylabel('plDDT')
plt.xlabel('position')
#########################################
plt.subplot(1,2,2);plt.title('Predicted Aligned Error')
plt.imshow(paes[model_name], cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.xlabel('Scored residue')
plt.ylabel('Aligned residue')
#########################################
return plt
def show_pdb(model_num=1, show_sidechains=False, show_mainchains=False, color="lDDT"):
model_name = f"model_{model_num}"
pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
view.addModel(open(pdb_filename,'r').read(),'pdb')
if color == "lDDT":
view.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':50,'max':90}}})
elif color == "rainbow":
view.setStyle({'cartoon': {'color':'spectrum'}})
elif color == "chain":
for n,chain,color in zip(range(2),list("ABCDEFGH"),
["lime","cyan","magenta","yellow","salmon","white","blue","orange"]):
view.setStyle({'chain':chain},{'cartoon': {'color':color}})
if show_sidechains:
BB = ['C','O','N']
view.addStyle({'and':[{'resn':["GLY","PRO"],'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
if show_mainchains:
BB = ['C','O','N','CA']
view.addStyle({'atom':BB},{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.zoomTo()
return view
show_pdb(model_num,show_sidechains, show_mainchains, color).show()
if color == "lDDT": plot_plddt_legend().show()
plot_confidence(model_num).show()
#@title Package and download results
!zip -FSr $jobname".result.zip" $jobname".log" $jobname"_msa_coverage.png" $jobname"_"*"relaxed_model_"*".pdb" $jobname"_lDDT.png" $jobname"_PAE.png"
files.download(f"{jobname}.result.zip")
###Output
_____no_output_____
###Markdown
AlphaFold2_complexes---------**UPDATE** (Aug. 13, 2021)This notebook is being retired and no longer updated. The functionality for complex prediction (including going beyond dimers) has been integrated in our [new advanced notebook](https://github.com/sokrypton/ColabFold/blob/main/beta/AlphaFold2_advanced.ipynb).---------Credit to Minkyung Baek @minkbaek and Yoshitaka Moriwaki @Ag_smith for initially showing protein-complex prediction works in alphafold2.- https://twitter.com/minkbaek/status/1417538291709071362- https://twitter.com/Ag_smith/status/1417063635000598528- [script](https://github.com/RosettaCommons/RoseTTAFold/blob/main/example/complex_modeling/make_joint_MSA_bacterial.py) from rosettafold for paired alignment generation**Instructions**- For *monomers* and *homo-oligomers*, see this [notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb).- For prokaryotic protein complexes (found in operons), we recommend using the `pair_msa` option.**Limitations**- This notebook does NOT use templates or amber relax at the end for refinement.- For a typical Google-Colab-GPU (16G) session, the max total length is **1400 residues**.
###Code
#@title Input protein sequences
import os
os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0'
from google.colab import files
import os.path
import re
import hashlib
def add_hash(x,y):
return x+"_"+hashlib.sha1(y.encode()).hexdigest()[:5]
query_sequence_a = 'AVLKIIQGALDTRELLKAYQEEACAKNFGAFCVFVGIVRKEDNIQGLSFDIYEALLKTWFEKWHHKAKDLGVVLKMAHSLGDVLIGQSSFLCVSMGKNRKNALELYENFIEDFKHNAPIWKYDLIHNKRIYAKERSHPLKGSGLLA' #@param {type:"string"}
query_sequence_a = "".join(query_sequence_a.split())
query_sequence_a = re.sub(r'[^A-Z]','', query_sequence_a.upper())
query_sequence_b = 'MMVEVRFFGPIKEENFFIKANDLKELRAILQEKEGLKEWLGVCAIALNDHLIDNLNTPLKDGDVISLLPPVCGG' #@param {type:"string"}
query_sequence_b = "".join(query_sequence_b.split())
query_sequence_b = re.sub(r'[^A-Z]','', query_sequence_b.upper())
# Using trick from @onoda_hiroki
# https://twitter.com/onoda_hiroki/status/1420068104239910915
# "U" indicates an "UNKNOWN" residue and it will not be modeled
# But we need linker of at least length 32
query_sequence_a = re.sub(r'U+',"U"*32,query_sequence_a)
query_sequence_b = re.sub(r'U+',"U"*32,query_sequence_b)
query_sequence = query_sequence_a + query_sequence_b
if len(query_sequence) > 1400:
print(f"WARNING: For a typical Google-Colab-GPU (16G) session, the max total length is 1400 residues. You are at {len(query_sequence)}!")
jobname = 'test' #@param {type:"string"}
jobname = "".join(jobname.split())
jobname = re.sub(r'\W+', '', jobname)
jobname = add_hash(jobname, query_sequence)
# number of models to use
#@markdown ---
#@markdown ### Advanced settings
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
msa_mode = "MMseqs2" #@param ["MMseqs2","single_sequence"]
use_msa = True if msa_mode == "MMseqs2" else False
pair_msa = False #@param {type:"boolean"}
disable_mmseqs2_filter = pair_msa
#@markdown ---
with open(f"{jobname}.log", "w") as text_file:
text_file.write("num_models=%s\n" % num_models)
text_file.write("use_msa=%s\n" % use_msa)
text_file.write("msa_mode=%s\n" % msa_mode)
text_file.write("pair_msa=%s\n" % pair_msa)
text_file.write("disable_mmseqs2_filter=%s\n" % disable_mmseqs2_filter)
#@title Install dependencies
%%bash -s $use_msa
USE_MSA=$1
if [ ! -f AF2_READY ]; then
# install dependencies
pip -q install biopython
pip -q install dm-haiku
pip -q install ml-collections
pip -q install py3Dmol
# download model
if [ ! -d "alphafold/" ]; then
git clone https://github.com/deepmind/alphafold.git --quiet
mv alphafold alphafold_
mv alphafold_/alphafold .
# remove "END" from PDBs, otherwise biopython complains
sed -i "s/pdb_lines.append('END')//" /content/alphafold/common/protein.py
sed -i "s/pdb_lines.append('ENDMDL')//" /content/alphafold/common/protein.py
fi
# download model params (~1 min)
if [ ! -d "params/" ]; then
wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
mkdir params
tar -xf alphafold_params_2021-07-14.tar -C params/
rm alphafold_params_2021-07-14.tar
fi
touch AF2_READY
fi
#@title Import libraries
# setup the model
if "IMPORTED" not in dir():
# hiding warning messages
#import warnings
#from absl import logging
#import tensorflow as tf
#warnings.filterwarnings('ignore')
#logging.set_verbosity("error")
#tf.get_logger().setLevel('ERROR')
import time
import requests
import tarfile
import sys
import numpy as np
import pickle
from string import ascii_uppercase
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
from alphafold.data.tools import hhsearch
# plotting libraries
import py3Dmol
import matplotlib.pyplot as plt
IMPORTED = True
def set_bfactor(pdb_filename, bfac, idx_res, chains):
I = open(pdb_filename,"r").readlines()
O = open(pdb_filename,"w")
for line in I:
if line[0:6] == "ATOM ":
seq_id = int(line[22:26].strip()) - 1
seq_id = np.where(idx_res == seq_id)[0][0]
O.write(f"{line[:21]}{chains[seq_id]}{line[22:60]}{bfac[seq_id]:6.2f}{line[66:]}")
O.close()
def predict_structure(prefix, feature_dict, Ls, random_seed=0, num_models=5):
"""Predicts structure using AlphaFold for the given sequence."""
# Minkyung's code
# add big enough number to residue index to indicate chain breaks
idx_res = feature_dict['residue_index']
L_prev = 0
# Ls: number of residues in each chain
for L_i in Ls[:-1]:
idx_res[L_prev+L_i:] += 200
L_prev += L_i
chains = list("".join([ascii_uppercase[n]*L for n,L in enumerate(Ls)]))
feature_dict['residue_index'] = idx_res
# Run the models.
plddts = []
paes = []
unrelaxed_pdb_lines = []
relaxed_pdb_lines = []
model_names = ["model_4","model_1","model_2","model_3","model_5"][:num_models]
for n,model_name in enumerate(model_names):
model_config = config.model_config(model_name+"_ptm")
model_config.data.eval.num_ensemble = 1
model_params = data.get_model_haiku_params(model_name+"_ptm", data_dir=".")
if model_name == "model_4":
model_runner = model.RunModel(model_config, model_params)
processed_feature_dict = model_runner.process_features(feature_dict,random_seed=0)
else:
# swap params
for k in model_runner.params.keys():
model_runner.params[k] = model_params[k]
print(f"running model_{n+1}")
prediction_result = model_runner.predict(processed_feature_dict)
# cleanup to save memory
if model_name == "model_5": del model_runner
del model_params
unrelaxed_protein = protein.from_prediction(processed_feature_dict,prediction_result)
unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
plddts.append(prediction_result['plddt'])
paes.append(prediction_result['predicted_aligned_error'])
# Delete unused outputs to save memory.
del prediction_result
# rerank models based on predicted lddt
lddt_rank = np.mean(plddts,-1).argsort()[::-1]
plddts_ranked = {}
paes_ranked = {}
print("model\tplldt\tpae_ab")
L = Ls[0]
for n,r in enumerate(lddt_rank):
plddt = plddts[r].mean()
pae_ab = (paes[r][L:,:L].mean() + paes[r][:L,L:].mean()) / 2
print(f"model_{n+1}\t{plddt:.2f}\t{pae_ab:.2f}")
unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
with open(unrelaxed_pdb_path, 'w') as f:
f.write(unrelaxed_pdb_lines[r])
set_bfactor(unrelaxed_pdb_path, plddts[r], idx_res, chains)
plddts_ranked[f"model_{n+1}"] = plddts[r]
paes_ranked[f"model_{n+1}"] = paes[r]
return plddts_ranked, paes_ranked
# CODE FROM MINKYUNG/ROSETTAFOLD
def read_a3m(a3m_lines):
'''parse an a3m files as a dictionary {label->sequence}'''
seq = []
lab = []
is_first = True
for line in a3m_lines.splitlines():
if line[0] == '>':
label = line.strip()[1:]
is_incl = True
if is_first: # include first sequence (query)
is_first = False
lab.append(label)
continue
if "UniRef" in label:
code = label.split()[0].split('_')[-1]
if code.startswith("UPI"): # UniParc identifier -- exclude
is_incl = False
continue
elif label.startswith("tr|"):
code = label.split('|')[1]
else:
is_incl = False
continue
lab.append(code)
else:
if is_incl:
seq.append(line.rstrip())
else:
continue
return seq, lab
# https://www.uniprot.org/help/accession_numbers
def uni2idx(ids):
'''convert uniprot ids into integers according to the structure
of uniprot accession numbers'''
ids2 = [i.split("-")[0] for i in ids]
ids2 = [i+'AAA0' if len(i)==6 else i for i in ids2]
arr = np.array([list(s) for s in ids2], dtype='|S1').view(np.uint8)
for i in [1,5,9]:
arr[:,i] -= ord('0')
arr[arr>=ord('A')] -= ord('A')
arr[arr>=ord('0')] -= ord('0')-26
arr[:,0][arr[:,0]>ord('Q')-ord('A')] -= 3
arr = arr.astype(np.int64)
coef = np.array([23,10,26,36,36,10,26,36,36,1], dtype=np.int64)
coef = np.tile(coef[None,:],[len(ids),1])
c1 = [i for i,id_ in enumerate(ids) if id_[0] in 'OPQ' and len(id_)==6]
c2 = [i for i,id_ in enumerate(ids) if id_[0] not in 'OPQ' and len(id_)==6]
coef[c1] = np.array([3, 10,36,36,36,1,1,1,1,1])
coef[c2] = np.array([23,10,26,36,36,1,1,1,1,1])
for i in range(1,10):
coef[:,-i-1] *= coef[:,-i]
return np.sum(arr*coef,axis=-1)
def run_mmseqs2(query_sequence, prefix, use_env=True, filter=False):
def submit(query_sequence, mode):
res = requests.post('https://a3m.mmseqs.com/ticket/msa', data={'q':f">1\n{query_sequence}", 'mode': mode})
return res.json()
def status(ID):
res = requests.get(f'https://a3m.mmseqs.com/ticket/{ID}')
return res.json()
def download(ID, path):
res = requests.get(f'https://a3m.mmseqs.com/result/download/{ID}')
with open(path,"wb") as out: out.write(res.content)
if filter:
mode = "env" if use_env else "all"
else:
mode = "env-nofilter" if use_env else "nofilter"
path = f"{prefix}_{mode}"
if not os.path.isdir(path): os.mkdir(path)
# call mmseqs2 api
tar_gz_file = f'{path}/out.tar.gz'
if not os.path.isfile(tar_gz_file):
out = submit(query_sequence, mode)
while out["status"] in ["RUNNING","PENDING"]:
time.sleep(1)
out = status(out["id"])
download(out["id"], tar_gz_file)
# parse a3m files
a3m_lines = []
a3m = f"{prefix}_{mode}.a3m"
if not os.path.isfile(a3m):
with tarfile.open(tar_gz_file) as tar_gz: tar_gz.extractall(path)
a3m_files = [f"{path}/uniref.a3m"]
if use_env: a3m_files.append(f"{path}/bfd.mgnify30.metaeuk30.smag30.a3m")
a3m_out = open(a3m,"w")
for a3m_file in a3m_files:
for line in open(a3m_file,"r"):
line = line.replace("\x00","")
if len(line) > 0:
a3m_lines.append(line)
a3m_out.write(line)
else:
a3m_lines = open(a3m).readlines()
return "".join(a3m_lines), len(a3m_lines)
#@title Call MMseqs2 to get MSA for each gene
Ls = [len(query_sequence_a),len(query_sequence_b)]
msas = []
deletion_matrices = []
if use_msa:
os.makedirs('tmp', exist_ok=True)
a3m_lines = {}
if pair_msa: a3m_lines_pair = {}
for c,sequence in zip(["a","b"],[query_sequence_a, query_sequence_b]):
prefix = hashlib.sha1(sequence.encode()).hexdigest()
prefix = os.path.join('tmp',prefix)
print(f"running mmseqs2 on query_{c} (use_env={True} filter={True})")
a3m_lines[c],num = run_mmseqs2(sequence, prefix, use_env=True, filter=True)
print(f"found {num} filtered sequences")
if pair_msa:
print(f"running mmseqs2 on query_{c} (use_env={False} filter={False})")
a3m_lines_pair[c],num = run_mmseqs2(sequence, prefix, use_env=False, filter=False)
print(f"found {num} unfiltered sequences")
if pair_msa:
# CODE FROM MINKYUNG/ROSETTAFOLD
msa1, lab1 = read_a3m(a3m_lines_pair["a"])
msa2, lab2 = read_a3m(a3m_lines_pair["b"])
if len(lab1) > 1 and len(lab2) > 1:
# convert uniprot ids into integers
hash1 = uni2idx(lab1[1:])
hash2 = uni2idx(lab2[1:])
# find pairs of uniprot ids which are separated by at most 10
idx1, idx2 = np.where(np.abs(hash1[:,None]-hash2[None,:]) < 10)
if idx1.shape[0] > 0:
a3m_lines["ab"] = ['>query\n%s%s\n'%(msa1[0],msa2[0])]
for i,j in zip(idx1,idx2):
a3m_lines["ab"].append(">%s_%s\n%s%s\n"%(lab1[i+1],lab2[j+1],msa1[i+1],msa2[j+1]))
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(a3m_lines["ab"]))
msas.append(msa)
deletion_matrices.append(deletion_matrix)
print("pairs found:",len(msa))
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines["a"])
msas.append([seq+"-"*Ls[1] for seq in msa])
deletion_matrices.append([mtx+[0]*Ls[1] for mtx in deletion_matrix])
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines["b"])
msas.append(["-"*Ls[0]+seq for seq in msa])
deletion_matrices.append([[0]*Ls[0]+mtx for mtx in deletion_matrix])
else:
msas.append([query_sequence])
deletion_matrices.append([[0]*len(query_sequence)])
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=msas, deletion_matrices=deletion_matrices),
}
#@title Plot Number of Sequences per Position
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(dpi=dpi)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.savefig(jobname+"_msa_coverage.png")
plt.show()
#@title Predict structure
plddts, paes = predict_structure(jobname, feature_dict, Ls=Ls, num_models=num_models)
#@title Plot Predicted Alignment Error
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(figsize=(3*num_models,2), dpi=dpi)
for n,(model_name,value) in enumerate(paes.items()):
plt.subplot(1,num_models,n+1)
plt.title(model_name)
plt.imshow(value,label=model_name,cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.savefig(jobname+"_PAE.png")
plt.show()
#@title Plot lDDT per residue
# confidence per position
dpi = 100#@param {type:"integer"}
plt.figure(dpi=dpi)
for model_name,value in plddts.items():
plt.plot(value,label=model_name)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted lDDT")
plt.xlabel("positions")
plt.savefig(jobname+"_lDDT.png")
plt.show()
#@title Display 3D structure {run: "auto"}
model_num = 1 #@param ["1", "2", "3", "4", "5"] {type:"raw"}
color = "chain" #@param ["chain", "lDDT", "rainbow"]
show_sidechains = False #@param {type:"boolean"}
show_mainchains = False #@param {type:"boolean"}
def plot_plddt_legend():
thresh = ['plDDT:','Very low (<50)','Low (60)','OK (70)','Confident (80)','Very high (>90)']
plt.figure(figsize=(1,0.1),dpi=100)
########################################
for c in ["#FFFFFF","#FF0000","#FFFF00","#00FF00","#00FFFF","#0000FF"]:
plt.bar(0, 0, color=c)
plt.legend(thresh, frameon=False,
loc='center', ncol=6,
handletextpad=1,
columnspacing=1,
markerscale=0.5,)
plt.axis(False)
return plt
def plot_confidence(model_num=1):
model_name = f"model_{model_num}"
plt.figure(figsize=(10,3),dpi=100)
"""Plots the legend for plDDT."""
#########################################
plt.subplot(1,2,1); plt.title('Predicted lDDT')
plt.plot(plddts[model_name])
for x in [len(query_sequence_a)]:
plt.plot([x,x],[0,100],color="black")
plt.ylabel('plDDT')
plt.xlabel('position')
#########################################
plt.subplot(1,2,2);plt.title('Predicted Aligned Error')
plt.imshow(paes[model_name], cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.xlabel('Scored residue')
plt.ylabel('Aligned residue')
#########################################
return plt
def show_pdb(model_num=1, show_sidechains=False, show_mainchains=False, color="lDDT"):
model_name = f"model_{model_num}"
pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
view.addModel(open(pdb_filename,'r').read(),'pdb')
if color == "lDDT":
view.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':50,'max':90}}})
elif color == "rainbow":
view.setStyle({'cartoon': {'color':'spectrum'}})
elif color == "chain":
for n,chain,color in zip(range(2),list("ABCDEFGH"),
["lime","cyan","magenta","yellow","salmon","white","blue","orange"]):
view.setStyle({'chain':chain},{'cartoon': {'color':color}})
if show_sidechains:
BB = ['C','O','N']
view.addStyle({'and':[{'resn':["GLY","PRO"],'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
if show_mainchains:
BB = ['C','O','N','CA']
view.addStyle({'atom':BB},{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.zoomTo()
return view
show_pdb(model_num,show_sidechains, show_mainchains, color).show()
if color == "lDDT": plot_plddt_legend().show()
plot_confidence(model_num).show()
#@title Package and download results
!zip -FSr $jobname".result.zip" $jobname".log" $jobname"_msa_coverage.png" $jobname"_"*"relaxed_model_"*".pdb" $jobname"_lDDT.png" $jobname"_PAE.png"
files.download(f"{jobname}.result.zip")
###Output
_____no_output_____
###Markdown
AlphaFold2_complexes---------**UPDATE** (Aug. 13, 2021)This notebook is being retired and no longer updated. The functionality for complex prediction (including going beyond dimers) has been integrated in our [new advanced notebook](https://github.com/sokrypton/ColabFold/blob/main/beta/AlphaFold2_advanced.ipynb).---------Credit to Minkyung Baek @minkbaek and Yoshitaka Moriwaki @Ag_smith for initially showing protein-complex prediction works in alphafold2.- https://twitter.com/minkbaek/status/1417538291709071362- https://twitter.com/Ag_smith/status/1417063635000598528- [script](https://github.com/RosettaCommons/RoseTTAFold/blob/main/example/complex_modeling/make_joint_MSA_bacterial.py) from rosettafold for paired alignment generation**Instructions**- For *monomers* and *homo-oligomers*, see this [notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb).- For prokaryotic protein complexes (found in operons), we recommend using the `pair_msa` option.**Limitations**- This notebook does NOT use templates or amber relax at the end for refinement.- For a typical Google-Colab-GPU (16G) session, the max total length is **1400 residues**.
###Code
#@title Input protein sequences
import os
os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0'
from google.colab import files
import os.path
import re
import hashlib
def add_hash(x,y):
return x+"_"+hashlib.sha1(y.encode()).hexdigest()[:5]
query_sequence_a = 'AVLKIIQGALDTRELLKAYQEEACAKNFGAFCVFVGIVRKEDNIQGLSFDIYEALLKTWFEKWHHKAKDLGVVLKMAHSLGDVLIGQSSFLCVSMGKNRKNALELYENFIEDFKHNAPIWKYDLIHNKRIYAKERSHPLKGSGLLA' #@param {type:"string"}
query_sequence_a = "".join(query_sequence_a.split())
query_sequence_a = re.sub(r'[^A-Z]','', query_sequence_a.upper())
query_sequence_b = 'MMVEVRFFGPIKEENFFIKANDLKELRAILQEKEGLKEWLGVCAIALNDHLIDNLNTPLKDGDVISLLPPVCGG' #@param {type:"string"}
query_sequence_b = "".join(query_sequence_b.split())
query_sequence_b = re.sub(r'[^A-Z]','', query_sequence_b.upper())
# Using trick from @onoda_hiroki
# https://twitter.com/onoda_hiroki/status/1420068104239910915
# "U" indicates an "UNKNOWN" residue and it will not be modeled
# But we need linker of at least length 32
query_sequence_a = re.sub(r'U+',"U"*32,query_sequence_a)
query_sequence_b = re.sub(r'U+',"U"*32,query_sequence_b)
query_sequence = query_sequence_a + query_sequence_b
if len(query_sequence) > 1400:
print(f"WARNING: For a typical Google-Colab-GPU (16G) session, the max total length is 1400 residues. You are at {len(query_sequence)}!")
jobname = 'test' #@param {type:"string"}
jobname = "".join(jobname.split())
jobname = re.sub(r'\W+', '', jobname)
jobname = add_hash(jobname, query_sequence)
# number of models to use
#@markdown ---
#@markdown ### Advanced settings
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
msa_mode = "MMseqs2" #@param ["MMseqs2","single_sequence"]
use_msa = True if msa_mode == "MMseqs2" else False
pair_msa = False #@param {type:"boolean"}
disable_mmseqs2_filter = pair_msa
#@markdown ---
with open(f"{jobname}.log", "w") as text_file:
text_file.write("num_models=%s\n" % num_models)
text_file.write("use_msa=%s\n" % use_msa)
text_file.write("msa_mode=%s\n" % msa_mode)
text_file.write("pair_msa=%s\n" % pair_msa)
text_file.write("disable_mmseqs2_filter=%s\n" % disable_mmseqs2_filter)
#@title Install dependencies
%%bash -s $use_msa
USE_MSA=$1
if [ ! -f AF2_READY ]; then
# install dependencies
pip -q install biopython
pip -q install dm-haiku
pip -q install ml-collections
pip -q install py3Dmol
wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/colabfold.py
# download model
if [ ! -d "alphafold/" ]; then
git clone https://github.com/deepmind/alphafold.git --quiet
mv alphafold alphafold_
mv alphafold_/alphafold .
# remove "END" from PDBs, otherwise biopython complains
sed -i "s/pdb_lines.append('END')//" /content/alphafold/common/protein.py
sed -i "s/pdb_lines.append('ENDMDL')//" /content/alphafold/common/protein.py
fi
# download model params (~1 min)
if [ ! -d "params/" ]; then
wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
mkdir params
tar -xf alphafold_params_2021-07-14.tar -C params/
rm alphafold_params_2021-07-14.tar
fi
touch AF2_READY
fi
#@title Import libraries
# setup the model
if "IMPORTED" not in dir():
import time
import requests
import tarfile
import sys
import numpy as np
import pickle
from string import ascii_uppercase
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
from alphafold.data.tools import hhsearch
import colabfold as cf
# plotting libraries
import py3Dmol
import matplotlib.pyplot as plt
IMPORTED = True
def set_bfactor(pdb_filename, bfac, idx_res, chains):
I = open(pdb_filename,"r").readlines()
O = open(pdb_filename,"w")
for line in I:
if line[0:6] == "ATOM ":
seq_id = int(line[22:26].strip()) - 1
seq_id = np.where(idx_res == seq_id)[0][0]
O.write(f"{line[:21]}{chains[seq_id]}{line[22:60]}{bfac[seq_id]:6.2f}{line[66:]}")
O.close()
def predict_structure(prefix, feature_dict, Ls, random_seed=0, num_models=5):
"""Predicts structure using AlphaFold for the given sequence."""
# Minkyung's code
# add big enough number to residue index to indicate chain breaks
idx_res = feature_dict['residue_index']
L_prev = 0
# Ls: number of residues in each chain
for L_i in Ls[:-1]:
idx_res[L_prev+L_i:] += 200
L_prev += L_i
chains = list("".join([ascii_uppercase[n]*L for n,L in enumerate(Ls)]))
feature_dict['residue_index'] = idx_res
# Run the models.
plddts = []
paes = []
unrelaxed_pdb_lines = []
relaxed_pdb_lines = []
model_names = ["model_4","model_1","model_2","model_3","model_5"][:num_models]
for n,model_name in enumerate(model_names):
model_config = config.model_config(model_name+"_ptm")
model_config.data.eval.num_ensemble = 1
model_params = data.get_model_haiku_params(model_name+"_ptm", data_dir=".")
if model_name == "model_4":
model_runner = model.RunModel(model_config, model_params)
processed_feature_dict = model_runner.process_features(feature_dict,random_seed=0)
else:
# swap params
for k in model_runner.params.keys():
model_runner.params[k] = model_params[k]
print(f"running model_{n+1}")
prediction_result = model_runner.predict(processed_feature_dict)
# cleanup to save memory
if model_name == "model_5": del model_runner
del model_params
unrelaxed_protein = protein.from_prediction(processed_feature_dict,prediction_result)
unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
plddts.append(prediction_result['plddt'])
paes.append(prediction_result['predicted_aligned_error'])
# Delete unused outputs to save memory.
del prediction_result
# rerank models based on predicted lddt
lddt_rank = np.mean(plddts,-1).argsort()[::-1]
plddts_ranked = {}
paes_ranked = {}
print("model\tplldt\tpae_ab")
L = Ls[0]
for n,r in enumerate(lddt_rank):
plddt = plddts[r].mean()
pae_ab = (paes[r][L:,:L].mean() + paes[r][:L,L:].mean()) / 2
print(f"model_{n+1}\t{plddt:.2f}\t{pae_ab:.2f}")
unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
with open(unrelaxed_pdb_path, 'w') as f:
f.write(unrelaxed_pdb_lines[r])
set_bfactor(unrelaxed_pdb_path, plddts[r], idx_res, chains)
plddts_ranked[f"model_{n+1}"] = plddts[r]
paes_ranked[f"model_{n+1}"] = paes[r]
return plddts_ranked, paes_ranked
# CODE FROM MINKYUNG/ROSETTAFOLD
def read_a3m(a3m_lines):
'''parse an a3m files as a dictionary {label->sequence}'''
seq = []
lab = []
is_first = True
for line in a3m_lines.splitlines():
if line[0] == '>':
label = line.rstrip().split()[0][1:]
is_incl = True
if is_first: # include first sequence (query)
is_first = False
lab.append(label)
continue
if "UniRef" in label:
code = label.split()[0].split('_')[-1]
if code.startswith("UPI"): # UniParc identifier -- exclude
is_incl = False
continue
elif label.startswith("tr|"):
code = label.split('|')[1]
else:
is_incl = False
continue
lab.append(code)
else:
if is_incl:
seq.append(line.rstrip())
else:
continue
return seq, lab
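# Minimal sketch of what read_a3m returns (toy a3m text, not real MMseqs2 output):
_demo_seqs, _demo_labs = read_a3m(">query\nMKV\n>UniRef100_P12345\nMKL\n")
# _demo_seqs -> ['MKV', 'MKL'] ; _demo_labs -> ['query', 'P12345']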
# https://www.uniprot.org/help/accession_numbers
def uni2idx(ids):
'''convert uniprot ids into integers according to the structure
of uniprot accession numbers'''
ids2 = [i.split("-")[0] for i in ids]
ids2 = [i+'AAA0' if len(i)==6 else i for i in ids2]
arr = np.array([list(s) for s in ids2], dtype='|S1').view(np.uint8)
for i in [1,5,9]:
arr[:,i] -= ord('0')
arr[arr>=ord('A')] -= ord('A')
arr[arr>=ord('0')] -= ord('0')-26
arr[:,0][arr[:,0]>ord('Q')-ord('A')] -= 3
arr = arr.astype(np.int64)
coef = np.array([23,10,26,36,36,10,26,36,36,1], dtype=np.int64)
coef = np.tile(coef[None,:],[len(ids),1])
c1 = [i for i,id_ in enumerate(ids) if id_[0] in 'OPQ' and len(id_)==6]
c2 = [i for i,id_ in enumerate(ids) if id_[0] not in 'OPQ' and len(id_)==6]
coef[c1] = np.array([3, 10,36,36,36,1,1,1,1,1])
coef[c2] = np.array([23,10,26,36,36,1,1,1,1,1])
for i in range(1,10):
coef[:,-i-1] *= coef[:,-i]
return np.sum(arr*coef,axis=-1)
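# Hypothetical illustration of how these integer hashes are used further below:
# two accessions are paired when their encodings differ by less than 10, the
# assumption being that accessions assigned close together tend to come from the
# same organism and hence may form a real complex.
_h1 = np.array([100, 250])
_h2 = np.array([105, 900])
_i1, _i2 = np.where(np.abs(_h1[:, None] - _h2[None, :]) < 10)
# _i1 -> array([0]), _i2 -> array([0]) : only the (100, 105) pair is kept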
def run_mmseqs2(query_sequence, prefix, use_env=True, filter=False):
def submit(query_sequence, mode):
res = requests.post('https://a3m.mmseqs.com/ticket/msa', data={'q':f">1\n{query_sequence}", 'mode': mode})
return res.json()
def status(ID):
res = requests.get(f'https://a3m.mmseqs.com/ticket/{ID}')
return res.json()
def download(ID, path):
res = requests.get(f'https://a3m.mmseqs.com/result/download/{ID}')
with open(path,"wb") as out: out.write(res.content)
if filter:
mode = "env" if use_env else "all"
else:
mode = "env-nofilter" if use_env else "nofilter"
path = f"{prefix}_{mode}"
if not os.path.isdir(path): os.mkdir(path)
# call mmseqs2 api
tar_gz_file = f'{path}/out.tar.gz'
if not os.path.isfile(tar_gz_file):
out = submit(query_sequence, mode)
while out["status"] in ["RUNNING","PENDING"]:
time.sleep(1)
out = status(out["id"])
download(out["id"], tar_gz_file)
# parse a3m files
a3m_lines = []
a3m = f"{prefix}_{mode}.a3m"
if not os.path.isfile(a3m):
with tarfile.open(tar_gz_file) as tar_gz: tar_gz.extractall(path)
a3m_files = [f"{path}/uniref.a3m"]
if use_env: a3m_files.append(f"{path}/bfd.mgnify30.metaeuk30.smag30.a3m")
a3m_out = open(a3m,"w")
for a3m_file in a3m_files:
for line in open(a3m_file,"r"):
line = line.replace("\x00","")
if len(line) > 0:
a3m_lines.append(line)
a3m_out.write(line)
else:
a3m_lines = open(a3m).readlines()
return "".join(a3m_lines), len(a3m_lines)
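# Hedged usage sketch (not executed; the cell below goes through colabfold's
# cf.run_mmseqs2 helper instead):
#   _a3m_text, _n_lines = run_mmseqs2("MKV...", "tmp/example", use_env=True, filter=True)
# This would submit the query to the public MMseqs2 API, poll until the job
# leaves the RUNNING/PENDING states, download the result tarball, and return the
# concatenated a3m text together with the number of a3m lines read.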
#@title Call MMseqs2 to get MSA for each gene
Ls = [len(query_sequence_a),len(query_sequence_b)]
msas = []
deletion_matrices = []
if use_msa:
os.makedirs('tmp', exist_ok=True)
prefix = hashlib.sha1(query_sequence.encode()).hexdigest()
prefix = os.path.join('tmp',prefix)
print(f"running mmseqs2 (use_env={True} filter={True})")
a3m_lines = cf.run_mmseqs2([query_sequence_a, query_sequence_b], prefix, use_env=True, filter=True)
if pair_msa:
a3m_lines.append([])
print(f"running mmseqs2 for pair_msa (use_env={False} filter={False})")
a3m_lines_pair = cf.run_mmseqs2([query_sequence_a, query_sequence_b], prefix, use_env=False, filter=False)
# CODE FROM MINKYUNG/ROSETTAFOLD
msa1, lab1 = read_a3m(a3m_lines_pair[0])
msa2, lab2 = read_a3m(a3m_lines_pair[1])
if len(lab1) > 1 and len(lab2) > 1:
# convert uniprot ids into integers
hash1 = uni2idx(lab1[1:])
hash2 = uni2idx(lab2[1:])
# find pairs of uniprot ids which are separated by at most 10
idx1, idx2 = np.where(np.abs(hash1[:,None]-hash2[None,:]) < 10)
if idx1.shape[0] > 0:
a3m_lines[2] = ['>query\n%s%s\n'%(msa1[0],msa2[0])]
for i,j in zip(idx1,idx2):
a3m_lines[2].append(">%s_%s\n%s%s\n"%(lab1[i+1],lab2[j+1],msa1[i+1],msa2[j+1]))
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(a3m_lines[2]))
msas.append(msa)
deletion_matrices.append(deletion_matrix)
print("pairs found:",len(msa))
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines[0])
msas.append([seq+"-"*Ls[1] for seq in msa])
deletion_matrices.append([mtx+[0]*Ls[1] for mtx in deletion_matrix])
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines[1])
msas.append(["-"*Ls[0]+seq for seq in msa])
deletion_matrices.append([[0]*Ls[0]+mtx for mtx in deletion_matrix])
else:
msas.append([query_sequence])
deletion_matrices.append([[0]*len(query_sequence)])
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=msas, deletion_matrices=deletion_matrices),
}
#@title Plot Number of Sequences per Position
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(dpi=dpi)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.savefig(jobname+"_msa_coverage.png")
plt.show()
#@title Predict structure
plddts, paes = predict_structure(jobname, feature_dict, Ls=Ls, num_models=num_models)
#@title Plot Predicted Alignment Error
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(figsize=(3*num_models,2), dpi=dpi)
for n,(model_name,value) in enumerate(paes.items()):
plt.subplot(1,num_models,n+1)
plt.title(model_name)
plt.imshow(value,label=model_name,cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.savefig(jobname+"_PAE.png")
plt.show()
#@title Plot lDDT per residue
# confidence per position
dpi = 100#@param {type:"integer"}
plt.figure(dpi=dpi)
for model_name,value in plddts.items():
plt.plot(value,label=model_name)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted lDDT")
plt.xlabel("positions")
plt.savefig(jobname+"_lDDT.png")
plt.show()
#@title Display 3D structure {run: "auto"}
model_num = 1 #@param ["1", "2", "3", "4", "5"] {type:"raw"}
color = "chain" #@param ["chain", "lDDT", "rainbow"]
show_sidechains = False #@param {type:"boolean"}
show_mainchains = False #@param {type:"boolean"}
def plot_plddt_legend():
thresh = ['plDDT:','Very low (<50)','Low (60)','OK (70)','Confident (80)','Very high (>90)']
plt.figure(figsize=(1,0.1),dpi=100)
########################################
for c in ["#FFFFFF","#FF0000","#FFFF00","#00FF00","#00FFFF","#0000FF"]:
plt.bar(0, 0, color=c)
plt.legend(thresh, frameon=False,
loc='center', ncol=6,
handletextpad=1,
columnspacing=1,
markerscale=0.5,)
plt.axis(False)
return plt
def plot_confidence(model_num=1):
model_name = f"model_{model_num}"
plt.figure(figsize=(10,3),dpi=100)
"""Plots the legend for plDDT."""
#########################################
plt.subplot(1,2,1); plt.title('Predicted lDDT')
plt.plot(plddts[model_name])
for x in [len(query_sequence_a)]:
plt.plot([x,x],[0,100],color="black")
plt.ylabel('plDDT')
plt.xlabel('position')
#########################################
plt.subplot(1,2,2);plt.title('Predicted Aligned Error')
plt.imshow(paes[model_name], cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.xlabel('Scored residue')
plt.ylabel('Aligned residue')
#########################################
return plt
def show_pdb(model_num=1, show_sidechains=False, show_mainchains=False, color="lDDT"):
model_name = f"model_{model_num}"
pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
view.addModel(open(pdb_filename,'r').read(),'pdb')
if color == "lDDT":
view.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':50,'max':90}}})
elif color == "rainbow":
view.setStyle({'cartoon': {'color':'spectrum'}})
elif color == "chain":
for n,chain,color in zip(range(2),list("ABCDEFGH"),
["lime","cyan","magenta","yellow","salmon","white","blue","orange"]):
view.setStyle({'chain':chain},{'cartoon': {'color':color}})
if show_sidechains:
BB = ['C','O','N']
view.addStyle({'and':[{'resn':["GLY","PRO"],'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
if show_mainchains:
BB = ['C','O','N','CA']
view.addStyle({'atom':BB},{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.zoomTo()
return view
show_pdb(model_num,show_sidechains, show_mainchains, color).show()
if color == "lDDT": plot_plddt_legend().show()
plot_confidence(model_num).show()
#@title Package and download results
!zip -FSr $jobname".result.zip" $jobname".log" $jobname"_msa_coverage.png" $jobname"_"*"relaxed_model_"*".pdb" $jobname"_lDDT.png" $jobname"_PAE.png"
files.download(f"{jobname}.result.zip")
###Output
_____no_output_____
###Markdown
AlphaFold2_complexesCredit to Minkyung Baek @minkbaek and Yoshitaka Moriwaki @Ag_smith for initially showing protein-complex prediction works in alphafold2.- https://twitter.com/minkbaek/status/1417538291709071362- https://twitter.com/Ag_smith/status/1417063635000598528- [script](https://github.com/RosettaCommons/RoseTTAFold/blob/main/example/complex_modeling/make_joint_MSA_bacterial.py) from rosettafold for paired alignment generation**Instructions**- For *homodimer*, paste same sequence into `query_sequence_a` and `query_sequence_b`, make sure `pair_msa` is disabled.- For prokaryotic protein complexes (found in operons), we recommend using the `pair_msa` option with `disable_mmseqs2_filter`- For *monomer* and *higher-order homo-oligomers*, see this [notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb).**Limitations**- This notebook does not use templates or Amber refinement at the end.
###Code
#@title Input protein sequences
from google.colab import files
import os
import os.path
import re
query_sequence_a = 'AVLKIIQGALDTRELLKAYQEEACAKNFGAFCVFVGIVRKEDNIQGLSFDIYEALLKTWFEKWHHKAKDLGVVLKMAHSLGDVLIGQSSFLCVSMGKNRKNALELYENFIEDFKHNAPIWKYDLIHNKRIYAKERSHPLKGSGLLA' #@param {type:"string"}
query_sequence_a = "".join(query_sequence_a.split())
query_sequence_a = re.sub(r'[^a-zA-Z]','', query_sequence_a).upper()
query_sequence_b = 'MMVEVRFFGPIKEENFFIKANDLKELRAILQEKEGLKEWLGVCAIALNDHLIDNLNTPLKDGDVISLLPPVCGG' #@param {type:"string"}
query_sequence_b = "".join(query_sequence_b.split())
query_sequence_b = re.sub(r'[^a-zA-Z]','', query_sequence_b).upper()
query_sequence = query_sequence_a + query_sequence_b
jobname = '3RPF' #@param {type:"string"}
jobname = "".join(jobname.split())
jobname = re.sub(r'\W+', '', jobname)
with open(f"{jobname}_a.fasta", "w") as text_file:
text_file.write(">1\n%s" % query_sequence_a)
with open(f"{jobname}_b.fasta", "w") as text_file:
text_file.write(">1\n%s" % query_sequence_b)
# number of models to use
#@markdown ---
#@markdown ### Advanced settings
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
msa_mode = "MMseqs2" #@param ["MMseqs2","single_sequence"]
use_msa = True if msa_mode == "MMseqs2" else False
disable_mmseqs2_filter = False #@param {type:"boolean"}
pair_msa = False #@param {type:"boolean"}
#@markdown ---
with open(f"{jobname}.log", "w") as text_file:
text_file.write("num_models=%s\n" % num_models)
text_file.write("use_msa=%s\n" % use_msa)
text_file.write("msa_mode=%s\n" % msa_mode)
text_file.write("pair_msa=%s\n" % pair_msa)
text_file.write("disable_mmseqs2_filter=%s\n" % disable_mmseqs2_filter)
# decide which a3m to use
if use_msa:
if disable_mmseqs2_filter:
a3m_file_a = f"{jobname}_a.nofilter.a3m"
a3m_file_b = f"{jobname}_b.nofilter.a3m"
else:
a3m_file_a = f"{jobname}_a.a3m"
a3m_file_b = f"{jobname}_b.a3m"
#@title Install dependencies
%%bash -s $use_msa
USE_MSA=$1
if [ ! -f AF2_READY ]; then
# install dependencies
pip -q install biopython
pip -q install dm-haiku
pip -q install ml-collections
pip -q install py3Dmol
# download model
if [ ! -d "alphafold/" ]; then
git clone https://github.com/deepmind/alphafold.git --quiet
mv alphafold alphafold_
mv alphafold_/alphafold .
# remove "END" from PDBs, otherwise biopython complains
sed -i "s/pdb_lines.append('END')//" /content/alphafold/common/protein.py
sed -i "s/pdb_lines.append('ENDMDL')//" /content/alphafold/common/protein.py
fi
# download model params (~1 min)
if [ ! -d "params/" ]; then
wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
mkdir params
tar -xf alphafold_params_2021-07-14.tar -C params/
rm alphafold_params_2021-07-14.tar
fi
touch AF2_READY
fi
# download libraries for interfacing with MMseqs2 API
if [ ${USE_MSA} == "True" ]; then
if [ ! -f MMSEQ2_READY ]; then
apt-get -qq -y update 2>&1 1>/dev/null
apt-get -qq -y install jq curl zlib1g gawk 2>&1 1>/dev/null
touch MMSEQ2_READY
fi
fi
#@title Call MMseqs2 to get MSA for each gene
%%bash -s $use_msa $jobname $disable_mmseqs2_filter
USE_MSA=$1
JOBNAME=$2
NOFILTER=$3
if [ ${USE_MSA} == "True" ]; then
for C in "a" "b"
do
if [ ${NOFILTER} == "True" ]; then
RESULT=${JOBNAME}_${C}.mmseqs2.nofilter.tar.gz
RESULT_A3M=${JOBNAME}_${C}.nofilter.a3m
else
RESULT=${JOBNAME}_${C}.mmseqs2.tar.gz
RESULT_A3M=${JOBNAME}_${C}.a3m
fi
if [ ! -f ${RESULT} ]; then
# query MMseqs2 webserver
echo "submitting job"
if [ ${NOFILTER} == "True" ]; then
ID=$(curl -s -F q=@${JOBNAME}_${C}.fasta -F mode=nofilter https://a3m.mmseqs.com/ticket/msa | jq -r '.id')
else
ID=$(curl -s -F q=@${JOBNAME}_${C}.fasta -F mode=all https://a3m.mmseqs.com/ticket/msa | jq -r '.id')
fi
STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
while [ "${STATUS}" == "RUNNING" ]; do
STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
sleep 1
done
if [ "${STATUS}" == "COMPLETE" ]; then
curl -s https://a3m.mmseqs.com/result/download/${ID} > ${RESULT}
tar xzf ${RESULT}
tr -d '\000' < uniref.a3m > ${RESULT_A3M}
rm uniref.a3m
mv pdb70.m8 ${JOBNAME}_${C}.m8
else
echo "MMseqs2 server did not return a valid result."
cp ${JOBNAME}_${C}.fasta ${RESULT_A3M}
fi
fi
if [ ${USE_MSA} == "True" ]; then
echo "Found $(grep -c ">" ${RESULT_A3M}) sequences"
fi
done
fi
#@title Import libraries and setup model
# setup the model
if "model" not in dir():
# hiding warning messages
import warnings
from absl import logging
import os
import tensorflow as tf
warnings.filterwarnings('ignore')
logging.set_verbosity("error")
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
tf.get_logger().setLevel('ERROR')
import sys
import numpy as np
import pickle
from string import ascii_uppercase
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
from alphafold.data.tools import hhsearch
# plotting libraries
import py3Dmol
import matplotlib.pyplot as plt
import ipywidgets
from ipywidgets import interact, fixed
def mk_mock_template(query_sequence):
# since alphafold's model requires a template input
# we create a blank example w/ zero input, confidence -1
ln = len(query_sequence)
output_templates_sequence = "-"*ln
output_confidence_scores = np.full(ln,-1)
templates_all_atom_positions = np.zeros((ln, templates.residue_constants.atom_type_num, 3))
templates_all_atom_masks = np.zeros((ln, templates.residue_constants.atom_type_num))
templates_aatype = templates.residue_constants.sequence_to_onehot(output_templates_sequence,
templates.residue_constants.HHBLITS_AA_TO_ID)
template_features = {'template_all_atom_positions': templates_all_atom_positions[None],
'template_all_atom_masks': templates_all_atom_masks[None],
'template_sequence': [f'none'.encode()],
'template_aatype': np.array(templates_aatype)[None],
'template_confidence_scores': output_confidence_scores[None],
'template_domain_names': [f'none'.encode()],
'template_release_date': [f'none'.encode()]}
return template_features
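# Minimal sanity sketch (hypothetical, cheap to run): the mock template is all
# zeros with a single leading template axis and confidence -1 everywhere, so it
# satisfies the model's template input without adding structural information.
_mock_demo = mk_mock_template("MKV")
assert _mock_demo['template_all_atom_masks'].shape == (1, 3, templates.residue_constants.atom_type_num)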
def set_bfactor(pdb_filename, bfac, idx_res, chains):
I = open(pdb_filename,"r").readlines()
O = open(pdb_filename,"w")
for line in I:
if line[0:6] == "ATOM ":
seq_id = int(line[22:26].strip()) - 1
seq_id = np.where(idx_res == seq_id)[0][0]
O.write(f"{line[:21]}{chains[seq_id]}{line[22:60]}{bfac[seq_id]:6.2f}{line[66:]}")
O.close()
def predict_structure(prefix, feature_dict, Ls, random_seed=0):
"""Predicts structure using AlphaFold for the given sequence."""
# Minkyung's code
# add big enough number to residue index to indicate chain breaks
idx_res = feature_dict['residue_index']
L_prev = 0
# Ls: number of residues in each chain
for L_i in Ls[:-1]:
idx_res[L_prev+L_i:] += 200
L_prev += L_i
chains = list("".join([ascii_uppercase[n]*L for n,L in enumerate(Ls)]))
feature_dict['residue_index'] = idx_res
# Run the models.
plddts = []
paes = []
unrelaxed_pdb_lines = []
relaxed_pdb_lines = []
for model_name, params in model_params.items():
print(f"running {model_name}")
# swap params to avoid recompiling
# note: models 1,2 have diff number of params compared to models 3,4,5
if any(str(m) in model_name for m in [1,2]): model_runner = model_runner_1
if any(str(m) in model_name for m in [3,4,5]): model_runner = model_runner_3
model_runner.params = params
processed_feature_dict = model_runner.process_features(feature_dict, random_seed=random_seed)
prediction_result = model_runner.predict(processed_feature_dict)
unrelaxed_protein = protein.from_prediction(processed_feature_dict,prediction_result)
unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
plddts.append(prediction_result['plddt'])
paes.append(prediction_result['predicted_aligned_error'])
# rerank models based on predicted lddt
lddt_rank = np.mean(plddts,-1).argsort()[::-1]
plddts_ranked = {}
paes_ranked = {}
print("model\tplldt\tpae_ab")
L = Ls[0]
for n,r in enumerate(lddt_rank):
plddt = plddts[r].mean()
pae_ab = (paes[r][L:,:L].mean() + paes[r][:L,L:].mean()) / 2
print(f"model_{n+1}\t{plddt:.2f}\t{pae_ab:.2f}")
unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
with open(unrelaxed_pdb_path, 'w') as f:
f.write(unrelaxed_pdb_lines[r])
set_bfactor(unrelaxed_pdb_path, plddts[r]/100, idx_res, chains)
plddts_ranked[f"model_{n+1}"] = plddts[r]
paes_ranked[f"model_{n+1}"] = paes[r]
return plddts_ranked, paes_ranked
##################################
# CODE FROM MINKYUNG/ROSETTAFOLD
##################################
def read_a3m(fn):
'''parse an a3m files as a dictionary {label->sequence}'''
seq = []
lab = []
is_first = True
for line in open(fn, "r"):
if line[0] == '>':
label = line.strip()[1:]
is_incl = True
if is_first: # include first sequence (query)
is_first = False
lab.append(label)
continue
if "UniRef" in label:
code = label.split()[0].split('_')[-1]
if code.startswith("UPI"): # UniParc identifier -- exclude
is_incl = False
continue
elif label.startswith("tr|"):
code = label.split('|')[1]
else:
is_incl = False
continue
lab.append(code)
else:
if is_incl:
seq.append(line.rstrip())
else:
continue
return seq, lab
# https://www.uniprot.org/help/accession_numbers
def uni2idx(ids):
'''convert uniprot ids into integers according to the structure
of uniprot accession numbers'''
ids2 = [i.split("-")[0] for i in ids]
ids2 = [i+'AAA0' if len(i)==6 else i for i in ids2]
arr = np.array([list(s) for s in ids2], dtype='|S1').view(np.uint8)
for i in [1,5,9]:
arr[:,i] -= ord('0')
arr[arr>=ord('A')] -= ord('A')
arr[arr>=ord('0')] -= ord('0')-26
arr[:,0][arr[:,0]>ord('Q')-ord('A')] -= 3
arr = arr.astype(np.int64)
coef = np.array([23,10,26,36,36,10,26,36,36,1], dtype=np.int64)
coef = np.tile(coef[None,:],[len(ids),1])
c1 = [i for i,id_ in enumerate(ids) if id_[0] in 'OPQ' and len(id_)==6]
c2 = [i for i,id_ in enumerate(ids) if id_[0] not in 'OPQ' and len(id_)==6]
coef[c1] = np.array([3, 10,36,36,36,1,1,1,1,1])
coef[c2] = np.array([23,10,26,36,36,1,1,1,1,1])
for i in range(1,10):
coef[:,-i-1] *= coef[:,-i]
return np.sum(arr*coef,axis=-1)
##########################
# DATABASE
##########################
if "model_params" not in dir(): model_params = {}
for model_name in ["model_1_ptm","model_2_ptm","model_3_ptm","model_4_ptm","model_5_ptm"][:num_models]:
if model_name not in model_params:
model_config = config.model_config(model_name)
model_config.data.eval.num_ensemble = 1
model_params[model_name] = data.get_model_haiku_params(model_name=model_name, data_dir=".")
if model_name == "model_1_ptm":
model_runner_1 = model.RunModel(model_config, model_params[model_name])
if model_name == "model_3_ptm":
model_runner_3 = model.RunModel(model_config, model_params[model_name])
#@title Gather input features, predict structure
Ls = [len(query_sequence_a),len(query_sequence_b)]
msas = []
deletion_matrices = []
# parse MSA
if use_msa:
# MSA_AB
if pair_msa:
###########################################################################
# CODE FROM MINKYUNG/ROSETTAFOLD
###########################################################################
msa1, lab1 = read_a3m(a3m_file_a)
msa2, lab2 = read_a3m(a3m_file_b)
if len(lab1) > 1 and len(lab2) > 1:
# convert uniprot ids into integers
hash1 = uni2idx(lab1[1:])
hash2 = uni2idx(lab2[1:])
# find pairs of uniprot ids which are separated by at most 10
idx1, idx2 = np.where(np.abs(hash1[:,None]-hash2[None,:]) < 10)
if idx1.shape[0] > 0:
a3m_lines = ['>query\n%s%s\n'%(msa1[0],msa2[0])]
for i,j in zip(idx1,idx2):
a3m_lines.append(">%s_%s\n%s%s\n"%(lab1[i+1],lab2[j+1],msa1[i+1],msa2[j+1]))
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(a3m_lines))
msas.append(msa)
deletion_matrices.append(deletion_matrix)
print("pairs found:",len(msa))
# MSA_A
a3m_lines = "".join(open(a3m_file_a,"r").readlines())
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines)
msas.append([seq+"-"*Ls[1] for seq in msa])
deletion_matrices.append([mtx+[0]*Ls[1] for mtx in deletion_matrix])
# MSA_B
a3m_lines = "".join(open(a3m_file_b,"r").readlines())
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines)
msas.append(["-"*Ls[0]+seq for seq in msa])
deletion_matrices.append([[0]*Ls[0]+mtx for mtx in deletion_matrix])
else:
msas.append([query_sequence])
deletion_matrices.append([[0]*len(query_sequence)])
# gather features
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=msas, deletion_matrices=deletion_matrices),
**mk_mock_template(query_sequence)
}
plddts, paes = predict_structure(jobname, feature_dict, Ls=Ls)
#@title Plot Number of Sequences per Position
# confidence per position
plt.figure(dpi=100)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.savefig(jobname+"_msa_coverage.png")
plt.show()
#@title Plot Predicted Alignment Error
# confidence per position
plt.figure(figsize=(3*num_models,2), dpi=100)
for n,(model_name,value) in enumerate(paes.items()):
plt.subplot(1,num_models,n+1)
plt.title(model_name)
plt.imshow(value,label=model_name,cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.savefig(jobname+"_PAE.png")
plt.show()
#@title Plot lDDT per residue
# confidence per position
plt.figure(dpi=100)
for model_name,value in plddts.items():
plt.plot(value,label=model_name)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted lDDT")
plt.xlabel("positions")
plt.savefig(jobname+"_lDDT.png")
plt.show()
#@title Show 3D structure
def show_pdb(model_name,
show_sidechains=False,
show_mainchain=False,
color="chain"):
def mainchain(p, color="white", model=0):
BB = ['C','O','N','CA']
p.addStyle({"model":model,'atom':BB},
{'stick':{'colorscheme':f"{color}Carbon",'radius':0.4}})
def sidechain(p, model=0):
HP = ["ALA","GLY","VAL","ILE","LEU","PHE","MET","PRO","TRP","CYS","TYR"]
BB = ['C','O','N']
p.addStyle({"model":model,'and':[{'resn':HP},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"yellowCarbon",'radius':0.4}})
p.addStyle({"model":model,'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':"yellowCarbon",'radius':0.4}})
p.addStyle({"model":model,'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':"yellowCarbon",'radius':0.4}})
p.addStyle({"model":model,'and':[{'resn':HP,'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.4}})
pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open(pdb_filename,'r').read(),'pdb')
if color == "lDDT":
p.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':0,'max':1}}})
elif color == "rainbow":
p.setStyle({'cartoon': {'color':'spectrum'}})
else:
p.setStyle({'chain':"A"},{'cartoon': {'color':'lime'}})
p.setStyle({'chain':"B"},{'cartoon': {'color':'cyan'}})
if show_sidechains: sidechain(p)
if show_mainchain: mainchain(p)
p.zoomTo()
return p.show()
interact(show_pdb,
model_name=ipywidgets.Dropdown(options=plddts.keys(), value='model_1'),
show_sidechains=ipywidgets.Checkbox(value=False),
show_mainchain=ipywidgets.Checkbox(value=False),
color=ipywidgets.Dropdown(options=['chain', 'rainbow', 'lDDT'], value='chain'))
#@title Package and download results
!zip -FSr $jobname".result.zip" $jobname".log" $jobname"_msa_coverage.png" $jobname"_"*"relaxed_model_"*".pdb" $jobname"_lDDT.png" $jobname"_PAE.png"
files.download(f"{jobname}.result.zip")
###Output
adding: 3RPF_PAE.png (deflated 1%)
###Markdown
AlphaFold2_complexes---------**UPDATE** (Aug. 13, 2021)This notebook is being retired and no longer updated. The functionality for complex prediction (including going beyond dimers) has been integrated in our [new advanced notebook](https://github.com/sokrypton/ColabFold/blob/main/beta/AlphaFold2_advanced.ipynb).---------Credit to Minkyung Baek @minkbaek and Yoshitaka Moriwaki @Ag_smith for initially showing protein-complex prediction works in alphafold2.- https://twitter.com/minkbaek/status/1417538291709071362- https://twitter.com/Ag_smith/status/1417063635000598528- [script](https://github.com/RosettaCommons/RoseTTAFold/blob/main/example/complex_modeling/make_joint_MSA_bacterial.py) from rosettafold for paired alignment generation**Instructions**- For *monomers* and *homo-oligomers*, see this [notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb).- For prokaryotic protein complexes (found in operons), we recommend using the `pair_msa` option.**Limitations**- This notebook does NOT use templates or amber relax at the end for refinement.- For a typical Google-Colab-GPU (16G) session, the max total length is **1400 residues**.
###Code
#@title Input protein sequences
import os
os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0'
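# These two settings help fit long complexes on a 16 GB Colab GPU: unified memory
# lets JAX spill to host RAM, and a memory fraction of 2.0 allows the allocator
# to request more than the physical GPU memory.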
from google.colab import files
import os.path
import re
import hashlib
def add_hash(x,y):
return x+"_"+hashlib.sha1(y.encode()).hexdigest()[:5]
query_sequence_a = 'AVLKIIQGALDTRELLKAYQEEACAKNFGAFCVFVGIVRKEDNIQGLSFDIYEALLKTWFEKWHHKAKDLGVVLKMAHSLGDVLIGQSSFLCVSMGKNRKNALELYENFIEDFKHNAPIWKYDLIHNKRIYAKERSHPLKGSGLLA' #@param {type:"string"}
query_sequence_a = "".join(query_sequence_a.split())
query_sequence_a = re.sub(r'[^A-Z]','', query_sequence_a.upper())
query_sequence_b = 'MMVEVRFFGPIKEENFFIKANDLKELRAILQEKEGLKEWLGVCAIALNDHLIDNLNTPLKDGDVISLLPPVCGG' #@param {type:"string"}
query_sequence_b = "".join(query_sequence_b.split())
query_sequence_b = re.sub(r'[^A-Z]','', query_sequence_b.upper())
# Using trick from @onoda_hiroki
# https://twitter.com/onoda_hiroki/status/1420068104239910915
# "U" indicates an "UNKNOWN" residue and it will not be modeled
# But we need linker of at least length 32
query_sequence_a = re.sub(r'U+',"U"*32,query_sequence_a)
query_sequence_b = re.sub(r'U+',"U"*32,query_sequence_b)
query_sequence = query_sequence_a + query_sequence_b
if len(query_sequence) > 1400:
print(f"WARNING: For a typical Google-Colab-GPU (16G) session, the max total length is 1400 residues. You are at {len(query_sequence)}!")
jobname = 'test' #@param {type:"string"}
jobname = "".join(jobname.split())
jobname = re.sub(r'\W+', '', jobname)
jobname = add_hash(jobname, query_sequence)
# number of models to use
#@markdown ---
#@markdown ### Advanced settings
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
msa_mode = "MMseqs2" #@param ["MMseqs2","single_sequence"]
use_msa = True if msa_mode == "MMseqs2" else False
pair_msa = False #@param {type:"boolean"}
disable_mmseqs2_filter = pair_msa
#@markdown ---
with open(f"{jobname}.log", "w") as text_file:
text_file.write("num_models=%s\n" % num_models)
text_file.write("use_msa=%s\n" % use_msa)
text_file.write("msa_mode=%s\n" % msa_mode)
text_file.write("pair_msa=%s\n" % pair_msa)
text_file.write("disable_mmseqs2_filter=%s\n" % disable_mmseqs2_filter)
#@title Install dependencies
%%bash -s $use_msa
USE_MSA=$1
if [ ! -f AF2_READY ]; then
# install dependencies
pip -q install biopython
pip -q install dm-haiku
pip -q install ml-collections
pip -q install py3Dmol
wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/colabfold.py
# download model
if [ ! -d "alphafold/" ]; then
git clone https://github.com/deepmind/alphafold.git --quiet
mv alphafold alphafold_
mv alphafold_/alphafold .
# remove "END" from PDBs, otherwise biopython complains
sed -i "s/pdb_lines.append('END')//" /content/alphafold/common/protein.py
sed -i "s/pdb_lines.append('ENDMDL')//" /content/alphafold/common/protein.py
fi
# download model params (~1 min)
if [ ! -d "params/" ]; then
wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
mkdir params
tar -xf alphafold_params_2021-07-14.tar -C params/
rm alphafold_params_2021-07-14.tar
fi
touch AF2_READY
fi
#@title Import libraries
# setup the model
if "IMPORTED" not in dir():
import time
import requests
import tarfile
import sys
import numpy as np
import pickle
from string import ascii_uppercase
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
from alphafold.data.tools import hhsearch
import colabfold as cf
# plotting libraries
import py3Dmol
import matplotlib.pyplot as plt
IMPORTED = True
def set_bfactor(pdb_filename, bfac, idx_res, chains):
I = open(pdb_filename,"r").readlines()
O = open(pdb_filename,"w")
for line in I:
if line[0:6] == "ATOM ":
seq_id = int(line[22:26].strip()) - 1
seq_id = np.where(idx_res == seq_id)[0][0]
O.write(f"{line[:21]}{chains[seq_id]}{line[22:60]}{bfac[seq_id]:6.2f}{line[66:]}")
O.close()
def predict_structure(prefix, feature_dict, Ls, random_seed=0, num_models=5):
"""Predicts structure using AlphaFold for the given sequence."""
# Minkyung's code
# add big enough number to residue index to indicate chain breaks
idx_res = feature_dict['residue_index']
L_prev = 0
# Ls: number of residues in each chain
for L_i in Ls[:-1]:
idx_res[L_prev+L_i:] += 200
L_prev += L_i
chains = list("".join([ascii_uppercase[n]*L for n,L in enumerate(Ls)]))
feature_dict['residue_index'] = idx_res
# Run the models.
plddts = []
paes = []
unrelaxed_pdb_lines = []
relaxed_pdb_lines = []
model_names = ["model_4","model_1","model_2","model_3","model_5"][:num_models]
for n,model_name in enumerate(model_names):
model_config = config.model_config(model_name+"_ptm")
model_config.data.eval.num_ensemble = 1
model_params = data.get_model_haiku_params(model_name+"_ptm", data_dir=".")
if model_name == "model_4":
model_runner = model.RunModel(model_config, model_params)
processed_feature_dict = model_runner.process_features(feature_dict,random_seed=0)
else:
# swap params
for k in model_runner.params.keys():
model_runner.params[k] = model_params[k]
print(f"running model_{n+1}")
prediction_result = model_runner.predict(processed_feature_dict)
# cleanup to save memory
if model_name == "model_5": del model_runner
del model_params
unrelaxed_protein = protein.from_prediction(processed_feature_dict,prediction_result)
unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
plddts.append(prediction_result['plddt'])
paes.append(prediction_result['predicted_aligned_error'])
# Delete unused outputs to save memory.
del prediction_result
# rerank models based on predicted lddt
lddt_rank = np.mean(plddts,-1).argsort()[::-1]
plddts_ranked = {}
paes_ranked = {}
print("model\tplldt\tpae_ab")
L = Ls[0]
for n,r in enumerate(lddt_rank):
plddt = plddts[r].mean()
pae_ab = (paes[r][L:,:L].mean() + paes[r][:L,L:].mean()) / 2
print(f"model_{n+1}\t{plddt:.2f}\t{pae_ab:.2f}")
unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
with open(unrelaxed_pdb_path, 'w') as f:
f.write(unrelaxed_pdb_lines[r])
set_bfactor(unrelaxed_pdb_path, plddts[r], idx_res, chains)
plddts_ranked[f"model_{n+1}"] = plddts[r]
paes_ranked[f"model_{n+1}"] = paes[r]
return plddts_ranked, paes_ranked
# CODE FROM MINKYUNG/ROSETTAFOLD
def read_a3m(a3m_lines):
'''parse an a3m files as a dictionary {label->sequence}'''
seq = []
lab = []
is_first = True
for line in a3m_lines.splitlines():
if line[0] == '>':
label = line.rstrip().split()[0][1:]
is_incl = True
if is_first: # include first sequence (query)
is_first = False
lab.append(label)
continue
if "UniRef" in label:
code = label.split()[0].split('_')[-1]
if code.startswith("UPI"): # UniParc identifier -- exclude
is_incl = False
continue
elif label.startswith("tr|"):
code = label.split('|')[1]
else:
is_incl = False
continue
lab.append(code)
else:
if is_incl:
seq.append(line.rstrip())
else:
continue
return seq, lab
# https://www.uniprot.org/help/accession_numbers
def uni2idx(ids):
'''convert uniprot ids into integers according to the structure
of uniprot accession numbers'''
ids2 = [i.split("-")[0] for i in ids]
ids2 = [i+'AAA0' if len(i)==6 else i for i in ids2]
arr = np.array([list(s) for s in ids2], dtype='|S1').view(np.uint8)
for i in [1,5,9]:
arr[:,i] -= ord('0')
arr[arr>=ord('A')] -= ord('A')
arr[arr>=ord('0')] -= ord('0')-26
arr[:,0][arr[:,0]>ord('Q')-ord('A')] -= 3
arr = arr.astype(np.int64)
coef = np.array([23,10,26,36,36,10,26,36,36,1], dtype=np.int64)
coef = np.tile(coef[None,:],[len(ids),1])
c1 = [i for i,id_ in enumerate(ids) if id_[0] in 'OPQ' and len(id_)==6]
c2 = [i for i,id_ in enumerate(ids) if id_[0] not in 'OPQ' and len(id_)==6]
coef[c1] = np.array([3, 10,36,36,36,1,1,1,1,1])
coef[c2] = np.array([23,10,26,36,36,1,1,1,1,1])
for i in range(1,10):
coef[:,-i-1] *= coef[:,-i]
return np.sum(arr*coef,axis=-1)
def run_mmseqs2(query_sequence, prefix, use_env=True, filter=False):
def submit(query_sequence, mode):
res = requests.post('https://a3m.mmseqs.com/ticket/msa', data={'q':f">1\n{query_sequence}", 'mode': mode})
return res.json()
def status(ID):
res = requests.get(f'https://a3m.mmseqs.com/ticket/{ID}')
return res.json()
def download(ID, path):
res = requests.get(f'https://a3m.mmseqs.com/result/download/{ID}')
with open(path,"wb") as out: out.write(res.content)
if filter:
mode = "env" if use_env else "all"
else:
mode = "env-nofilter" if use_env else "nofilter"
path = f"{prefix}_{mode}"
if not os.path.isdir(path): os.mkdir(path)
# call mmseqs2 api
tar_gz_file = f'{path}/out.tar.gz'
if not os.path.isfile(tar_gz_file):
out = submit(query_sequence, mode)
while out["status"] in ["RUNNING","PENDING"]:
time.sleep(1)
out = status(out["id"])
download(out["id"], tar_gz_file)
# parse a3m files
a3m_lines = []
a3m = f"{prefix}_{mode}.a3m"
if not os.path.isfile(a3m):
with tarfile.open(tar_gz_file) as tar_gz: tar_gz.extractall(path)
a3m_files = [f"{path}/uniref.a3m"]
if use_env: a3m_files.append(f"{path}/bfd.mgnify30.metaeuk30.smag30.a3m")
a3m_out = open(a3m,"w")
for a3m_file in a3m_files:
for line in open(a3m_file,"r"):
line = line.replace("\x00","")
if len(line) > 0:
a3m_lines.append(line)
a3m_out.write(line)
else:
a3m_lines = open(a3m).readlines()
return "".join(a3m_lines), len(a3m_lines)
#@title Call MMseqs2 to get MSA for each gene
Ls = [len(query_sequence_a),len(query_sequence_b)]
msas = []
deletion_matrices = []
if use_msa:
os.makedirs('tmp', exist_ok=True)
prefix = hashlib.sha1(query_sequence.encode()).hexdigest()
prefix = os.path.join('tmp',prefix)
print(f"running mmseqs2 (use_env={True} filter={True})")
a3m_lines = cf.run_mmseqs2([query_sequence_a, query_sequence_b], prefix, use_env=True, filter=True)
if pair_msa:
a3m_lines.append([])
print(f"running mmseqs2 for pair_msa (use_env={False} filter={False})")
a3m_lines_pair = cf.run_mmseqs2([query_sequence_a, query_sequence_b], prefix, use_env=False, filter=False)
# CODE FROM MINKYUNG/ROSETTAFOLD
msa1, lab1 = read_a3m(a3m_lines_pair[0])
msa2, lab2 = read_a3m(a3m_lines_pair[1])
if len(lab1) > 1 and len(lab2) > 1:
# convert uniprot ids into integers
hash1 = uni2idx(lab1[1:])
hash2 = uni2idx(lab2[1:])
# find pairs of uniprot ids which are separated by at most 10
idx1, idx2 = np.where(np.abs(hash1[:,None]-hash2[None,:]) < 10)
if idx1.shape[0] > 0:
a3m_lines[2] = ['>query\n%s%s\n'%(msa1[0],msa2[0])]
for i,j in zip(idx1,idx2):
a3m_lines[2].append(">%s_%s\n%s%s\n"%(lab1[i+1],lab2[j+1],msa1[i+1],msa2[j+1]))
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(a3m_lines[2]))
msas.append(msa)
deletion_matrices.append(deletion_matrix)
print("pairs found:",len(msa))
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines[0])
msas.append([seq+"-"*Ls[1] for seq in msa])
deletion_matrices.append([mtx+[0]*Ls[1] for mtx in deletion_matrix])
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines[1])
msas.append(["-"*Ls[0]+seq for seq in msa])
deletion_matrices.append([[0]*Ls[0]+mtx for mtx in deletion_matrix])
else:
msas.append([query_sequence])
deletion_matrices.append([[0]*len(query_sequence)])
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=msas, deletion_matrices=deletion_matrices),
}
#@title Plot Number of Sequences per Position
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(dpi=dpi)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.savefig(jobname+"_msa_coverage.png")
plt.show()
#@title Predict structure
plddts, paes = predict_structure(jobname, feature_dict, Ls=Ls, num_models=num_models)
#@title Plot Predicted Alignment Error
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(figsize=(3*num_models,2), dpi=dpi)
for n,(model_name,value) in enumerate(paes.items()):
plt.subplot(1,num_models,n+1)
plt.title(model_name)
plt.imshow(value,label=model_name,cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.savefig(jobname+"_PAE.png")
plt.show()
#@title Plot lDDT per residue
# confidence per position
dpi = 100#@param {type:"integer"}
plt.figure(dpi=dpi)
for model_name,value in plddts.items():
plt.plot(value,label=model_name)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted lDDT")
plt.xlabel("positions")
plt.savefig(jobname+"_lDDT.png")
plt.show()
#@title Display 3D structure {run: "auto"}
model_num = 1 #@param ["1", "2", "3", "4", "5"] {type:"raw"}
color = "chain" #@param ["chain", "lDDT", "rainbow"]
show_sidechains = False #@param {type:"boolean"}
show_mainchains = False #@param {type:"boolean"}
def plot_plddt_legend():
thresh = ['plDDT:','Very low (<50)','Low (60)','OK (70)','Confident (80)','Very high (>90)']
plt.figure(figsize=(1,0.1),dpi=100)
########################################
for c in ["#FFFFFF","#FF0000","#FFFF00","#00FF00","#00FFFF","#0000FF"]:
plt.bar(0, 0, color=c)
plt.legend(thresh, frameon=False,
loc='center', ncol=6,
handletextpad=1,
columnspacing=1,
markerscale=0.5,)
plt.axis(False)
return plt
def plot_confidence(model_num=1):
model_name = f"model_{model_num}"
plt.figure(figsize=(10,3),dpi=100)
"""Plots the legend for plDDT."""
#########################################
plt.subplot(1,2,1); plt.title('Predicted lDDT')
plt.plot(plddts[model_name])
for x in [len(query_sequence_a)]:
plt.plot([x,x],[0,100],color="black")
plt.ylabel('plDDT')
plt.xlabel('position')
#########################################
plt.subplot(1,2,2);plt.title('Predicted Aligned Error')
plt.imshow(paes[model_name], cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.xlabel('Scored residue')
plt.ylabel('Aligned residue')
#########################################
return plt
def show_pdb(model_num=1, show_sidechains=False, show_mainchains=False, color="lDDT"):
model_name = f"model_{model_num}"
pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
view.addModel(open(pdb_filename,'r').read(),'pdb')
if color == "lDDT":
view.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':50,'max':90}}})
elif color == "rainbow":
view.setStyle({'cartoon': {'color':'spectrum'}})
elif color == "chain":
for n,chain,color in zip(range(2),list("ABCDEFGH"),
["lime","cyan","magenta","yellow","salmon","white","blue","orange"]):
view.setStyle({'chain':chain},{'cartoon': {'color':color}})
if show_sidechains:
BB = ['C','O','N']
view.addStyle({'and':[{'resn':["GLY","PRO"],'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
if show_mainchains:
BB = ['C','O','N','CA']
view.addStyle({'atom':BB},{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.zoomTo()
return view
show_pdb(model_num,show_sidechains, show_mainchains, color).show()
if color == "lDDT": plot_plddt_legend().show()
plot_confidence(model_num).show()
#@title Package and download results
!zip -FSr $jobname".result.zip" $jobname".log" $jobname"_msa_coverage.png" $jobname"_"*"relaxed_model_"*".pdb" $jobname"_lDDT.png" $jobname"_PAE.png"
files.download(f"{jobname}.result.zip")
###Output
_____no_output_____
###Markdown
AlphaFold2_complexesCredit to Minkyung Baek @minkbaek and Yoshitaka Moriwaki @Ag_smith for initially showing protein-complex prediction works in alphafold2.- https://twitter.com/minkbaek/status/1417538291709071362- https://twitter.com/Ag_smith/status/1417063635000598528- [script](https://github.com/RosettaCommons/RoseTTAFold/blob/main/example/complex_modeling/make_joint_MSA_bacterial.py) from rosettafold for paired alignment generation**Instructions**- For *monomers* and *homo-oligomers*, see this [notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb).- For prokaryotic protein complexes (found in operons), we recommend using the `pair_msa` option.**Limitations**- This notebook does NOT use templates or amber relax at the end for refinement.- For a typical Google-Colab-GPU (16G) session, the max total length is **1400 residues**.
###Code
#@title Input protein sequences
import os
os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0'
from google.colab import files
import os.path
import re
import hashlib
def add_hash(x,y):
return x+"_"+hashlib.sha1(y.encode()).hexdigest()[:5]
query_sequence_a = 'AVLKIIQGALDTRELLKAYQEEACAKNFGAFCVFVGIVRKEDNIQGLSFDIYEALLKTWFEKWHHKAKDLGVVLKMAHSLGDVLIGQSSFLCVSMGKNRKNALELYENFIEDFKHNAPIWKYDLIHNKRIYAKERSHPLKGSGLLA' #@param {type:"string"}
query_sequence_a = "".join(query_sequence_a.split())
query_sequence_a = re.sub(r'[^A-Z]','', query_sequence_a.upper())
query_sequence_b = 'MMVEVRFFGPIKEENFFIKANDLKELRAILQEKEGLKEWLGVCAIALNDHLIDNLNTPLKDGDVISLLPPVCGG' #@param {type:"string"}
query_sequence_b = "".join(query_sequence_b.split())
query_sequence_b = re.sub(r'[^A-Z]','', query_sequence_b.upper())
# Using trick from @onoda_hiroki
# https://twitter.com/onoda_hiroki/status/1420068104239910915
# "U" indicates an "UNKNOWN" residue and it will not be modeled
# But we need linker of at least length 32
query_sequence_a = re.sub(r'U+',"U"*32,query_sequence_a)
query_sequence_b = re.sub(r'U+',"U"*32,query_sequence_b)
query_sequence = query_sequence_a + query_sequence_b
if len(query_sequence) > 1400:
print(f"WARNING: For a typical Google-Colab-GPU (16G) session, the max total length is 1400 residues. You are at {len(query_sequence)}!")
jobname = 'test' #@param {type:"string"}
jobname = "".join(jobname.split())
jobname = re.sub(r'\W+', '', jobname)
jobname = add_hash(jobname, query_sequence)
# number of models to use
#@markdown ---
#@markdown ### Advanced settings
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
msa_mode = "MMseqs2" #@param ["MMseqs2","single_sequence"]
use_msa = True if msa_mode == "MMseqs2" else False
pair_msa = False #@param {type:"boolean"}
disable_mmseqs2_filter = pair_msa
#@markdown ---
with open(f"{jobname}.log", "w") as text_file:
text_file.write("num_models=%s\n" % num_models)
text_file.write("use_msa=%s\n" % use_msa)
text_file.write("msa_mode=%s\n" % msa_mode)
text_file.write("pair_msa=%s\n" % pair_msa)
text_file.write("disable_mmseqs2_filter=%s\n" % disable_mmseqs2_filter)
#@title Install dependencies
%%bash -s $use_msa
USE_MSA=$1
if [ ! -f AF2_READY ]; then
# install dependencies
pip -q install biopython
pip -q install dm-haiku
pip -q install ml-collections
pip -q install py3Dmol
# download model
if [ ! -d "alphafold/" ]; then
git clone https://github.com/deepmind/alphafold.git --quiet
mv alphafold alphafold_
mv alphafold_/alphafold .
# remove "END" from PDBs, otherwise biopython complains
sed -i "s/pdb_lines.append('END')//" /content/alphafold/common/protein.py
sed -i "s/pdb_lines.append('ENDMDL')//" /content/alphafold/common/protein.py
fi
# download model params (~1 min)
if [ ! -d "params/" ]; then
wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
mkdir params
tar -xf alphafold_params_2021-07-14.tar -C params/
rm alphafold_params_2021-07-14.tar
fi
touch AF2_READY
fi
#@title Import libraries
# setup the model
if "IMPORTED" not in dir():
# hiding warning messages
#import warnings
#from absl import logging
#import tensorflow as tf
#warnings.filterwarnings('ignore')
#logging.set_verbosity("error")
#tf.get_logger().setLevel('ERROR')
import time
import requests
import tarfile
import sys
import numpy as np
import pickle
from string import ascii_uppercase
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
from alphafold.data.tools import hhsearch
# plotting libraries
import py3Dmol
import matplotlib.pyplot as plt
IMPORTED = True
def set_bfactor(pdb_filename, bfac, idx_res, chains):
I = open(pdb_filename,"r").readlines()
O = open(pdb_filename,"w")
for line in I:
if line[0:6] == "ATOM ":
seq_id = int(line[22:26].strip()) - 1
seq_id = np.where(idx_res == seq_id)[0][0]
O.write(f"{line[:21]}{chains[seq_id]}{line[22:60]}{bfac[seq_id]:6.2f}{line[66:]}")
O.close()
def predict_structure(prefix, feature_dict, Ls, random_seed=0, num_models=5):
"""Predicts structure using AlphaFold for the given sequence."""
# Minkyung's code
# add big enough number to residue index to indicate chain breaks
idx_res = feature_dict['residue_index']
L_prev = 0
# Ls: number of residues in each chain
for L_i in Ls[:-1]:
idx_res[L_prev+L_i:] += 200
L_prev += L_i
chains = list("".join([ascii_uppercase[n]*L for n,L in enumerate(Ls)]))
feature_dict['residue_index'] = idx_res
# Run the models.
plddts = []
paes = []
unrelaxed_pdb_lines = []
relaxed_pdb_lines = []
model_names = ["model_4","model_1","model_2","model_3","model_5"][:num_models]
for n,model_name in enumerate(model_names):
model_config = config.model_config(model_name+"_ptm")
model_config.data.eval.num_ensemble = 1
model_params = data.get_model_haiku_params(model_name+"_ptm", data_dir=".")
if model_name == "model_4":
model_runner = model.RunModel(model_config, model_params)
processed_feature_dict = model_runner.process_features(feature_dict,random_seed=0)
else:
# swap params
for k in model_runner.params.keys():
model_runner.params[k] = model_params[k]
print(f"running model_{n+1}")
prediction_result = model_runner.predict(processed_feature_dict)
# cleanup to save memory
if model_name == "model_5": del model_runner
del model_params
unrelaxed_protein = protein.from_prediction(processed_feature_dict,prediction_result)
unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
plddts.append(prediction_result['plddt'])
paes.append(prediction_result['predicted_aligned_error'])
# Delete unused outputs to save memory.
del prediction_result
# rerank models based on predicted lddt
lddt_rank = np.mean(plddts,-1).argsort()[::-1]
plddts_ranked = {}
paes_ranked = {}
print("model\tplldt\tpae_ab")
L = Ls[0]
for n,r in enumerate(lddt_rank):
plddt = plddts[r].mean()
pae_ab = (paes[r][L:,:L].mean() + paes[r][:L,L:].mean()) / 2
print(f"model_{n+1}\t{plddt:.2f}\t{pae_ab:.2f}")
unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
with open(unrelaxed_pdb_path, 'w') as f:
f.write(unrelaxed_pdb_lines[r])
set_bfactor(unrelaxed_pdb_path, plddts[r], idx_res, chains)
plddts_ranked[f"model_{n+1}"] = plddts[r]
paes_ranked[f"model_{n+1}"] = paes[r]
return plddts_ranked, paes_ranked
# CODE FROM MINKYUNG/ROSETTAFOLD
def read_a3m(a3m_lines):
'''parse an a3m files as a dictionary {label->sequence}'''
seq = []
lab = []
is_first = True
for line in a3m_lines.splitlines():
if line[0] == '>':
label = line.strip()[1:]
is_incl = True
if is_first: # include first sequence (query)
is_first = False
lab.append(label)
continue
if "UniRef" in label:
code = label.split()[0].split('_')[-1]
if code.startswith("UPI"): # UniParc identifier -- exclude
is_incl = False
continue
elif label.startswith("tr|"):
code = label.split('|')[1]
else:
is_incl = False
continue
lab.append(code)
else:
if is_incl:
seq.append(line.rstrip())
else:
continue
return seq, lab
# https://www.uniprot.org/help/accession_numbers
def uni2idx(ids):
'''convert uniprot ids into integers according to the structure
of uniprot accession numbers'''
ids2 = [i.split("-")[0] for i in ids]
ids2 = [i+'AAA0' if len(i)==6 else i for i in ids2]
arr = np.array([list(s) for s in ids2], dtype='|S1').view(np.uint8)
for i in [1,5,9]:
arr[:,i] -= ord('0')
arr[arr>=ord('A')] -= ord('A')
arr[arr>=ord('0')] -= ord('0')-26
arr[:,0][arr[:,0]>ord('Q')-ord('A')] -= 3
arr = arr.astype(np.int64)
coef = np.array([23,10,26,36,36,10,26,36,36,1], dtype=np.int64)
coef = np.tile(coef[None,:],[len(ids),1])
c1 = [i for i,id_ in enumerate(ids) if id_[0] in 'OPQ' and len(id_)==6]
c2 = [i for i,id_ in enumerate(ids) if id_[0] not in 'OPQ' and len(id_)==6]
coef[c1] = np.array([3, 10,36,36,36,1,1,1,1,1])
coef[c2] = np.array([23,10,26,36,36,1,1,1,1,1])
for i in range(1,10):
coef[:,-i-1] *= coef[:,-i]
return np.sum(arr*coef,axis=-1)
def run_mmseqs2(query_sequence, prefix, use_env=True, filter=False):
def submit(query_sequence, mode):
res = requests.post('https://a3m.mmseqs.com/ticket/msa', data={'q':f">1\n{query_sequence}", 'mode': mode})
return res.json()
def status(ID):
res = requests.get(f'https://a3m.mmseqs.com/ticket/{ID}')
return res.json()
def download(ID, path):
res = requests.get(f'https://a3m.mmseqs.com/result/download/{ID}')
with open(path,"wb") as out: out.write(res.content)
if filter:
mode = "env" if use_env else "all"
else:
mode = "env-nofilter" if use_env else "nofilter"
path = f"{prefix}_{mode}"
if not os.path.isdir(path): os.mkdir(path)
# call mmseqs2 api
tar_gz_file = f'{path}/out.tar.gz'
if not os.path.isfile(tar_gz_file):
out = submit(query_sequence, mode)
while out["status"] in ["RUNNING","PENDING"]:
time.sleep(1)
out = status(out["id"])
download(out["id"], tar_gz_file)
# parse a3m files
a3m_lines = []
a3m = f"{prefix}_{mode}.a3m"
if not os.path.isfile(a3m):
with tarfile.open(tar_gz_file) as tar_gz: tar_gz.extractall(path)
a3m_files = [f"{path}/uniref.a3m"]
if use_env: a3m_files.append(f"{path}/bfd.mgnify30.metaeuk30.smag30.a3m")
a3m_out = open(a3m,"w")
for a3m_file in a3m_files:
for line in open(a3m_file,"r"):
line = line.replace("\x00","")
if len(line) > 0:
a3m_lines.append(line)
a3m_out.write(line)
else:
a3m_lines = open(a3m).readlines()
return "".join(a3m_lines), len(a3m_lines)
#@title Call MMseqs2 to get MSA for each gene
Ls = [len(query_sequence_a),len(query_sequence_b)]
msas = []
deletion_matrices = []
if use_msa:
os.makedirs('tmp', exist_ok=True)
a3m_lines = {}
if pair_msa: a3m_lines_pair = {}
for c,sequence in zip(["a","b"],[query_sequence_a, query_sequence_b]):
prefix = hashlib.sha1(sequence.encode()).hexdigest()
prefix = os.path.join('tmp',prefix)
print(f"running mmseqs2 on query_{c} (use_env={True} filter={True})")
a3m_lines[c],num = run_mmseqs2(sequence, prefix, use_env=True, filter=True)
print(f"found {num} filtered sequences")
if pair_msa:
print(f"running mmseqs2 on query_{c} (use_env={False} filter={False})")
a3m_lines_pair[c],num = run_mmseqs2(sequence, prefix, use_env=False, filter=False)
print(f"found {num} unfiltered sequences")
if pair_msa:
# CODE FROM MINKYUNG/ROSETTAFOLD
msa1, lab1 = read_a3m(a3m_lines_pair["a"])
msa2, lab2 = read_a3m(a3m_lines_pair["b"])
if len(lab1) > 1 and len(lab2) > 1:
# convert uniprot ids into integers
hash1 = uni2idx(lab1[1:])
hash2 = uni2idx(lab2[1:])
# find pairs of uniprot ids which are separated by at most 10
idx1, idx2 = np.where(np.abs(hash1[:,None]-hash2[None,:]) < 10)
if idx1.shape[0] > 0:
a3m_lines["ab"] = ['>query\n%s%s\n'%(msa1[0],msa2[0])]
for i,j in zip(idx1,idx2):
a3m_lines["ab"].append(">%s_%s\n%s%s\n"%(lab1[i+1],lab2[j+1],msa1[i+1],msa2[j+1]))
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(a3m_lines["ab"]))
msas.append(msa)
deletion_matrices.append(deletion_matrix)
print("pairs found:",len(msa))
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines["a"])
msas.append([seq+"-"*Ls[1] for seq in msa])
deletion_matrices.append([mtx+[0]*Ls[1] for mtx in deletion_matrix])
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines["b"])
msas.append(["-"*Ls[0]+seq for seq in msa])
deletion_matrices.append([[0]*Ls[0]+mtx for mtx in deletion_matrix])
else:
msas.append([query_sequence])
deletion_matrices.append([[0]*len(query_sequence)])
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=msas, deletion_matrices=deletion_matrices),
}
#@title Plot Number of Sequences per Position
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(dpi=dpi)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.savefig(jobname+"_msa_coverage.png")
plt.show()
#@title Predict structure
plddts, paes = predict_structure(jobname, feature_dict, Ls=Ls, num_models=num_models)
#@title Plot Predicted Alignment Error
dpi = 100#@param {type:"integer"}
# confidence per position
plt.figure(figsize=(3*num_models,2), dpi=dpi)
for n,(model_name,value) in enumerate(paes.items()):
plt.subplot(1,num_models,n+1)
plt.title(model_name)
plt.imshow(value,label=model_name,cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.savefig(jobname+"_PAE.png")
plt.show()
#@title Plot lDDT per residue
# confidence per position
dpi = 100#@param {type:"integer"}
plt.figure(dpi=dpi)
for model_name,value in plddts.items():
plt.plot(value,label=model_name)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted lDDT")
plt.xlabel("positions")
plt.savefig(jobname+"_lDDT.png")
plt.show()
#@title Display 3D structure {run: "auto"}
model_num = 1 #@param ["1", "2", "3", "4", "5"] {type:"raw"}
color = "chain" #@param ["chain", "lDDT", "rainbow"]
show_sidechains = False #@param {type:"boolean"}
show_mainchains = False #@param {type:"boolean"}
def plot_plddt_legend():
thresh = ['plDDT:','Very low (<50)','Low (60)','OK (70)','Confident (80)','Very high (>90)']
plt.figure(figsize=(1,0.1),dpi=100)
########################################
for c in ["#FFFFFF","#FF0000","#FFFF00","#00FF00","#00FFFF","#0000FF"]:
plt.bar(0, 0, color=c)
plt.legend(thresh, frameon=False,
loc='center', ncol=6,
handletextpad=1,
columnspacing=1,
markerscale=0.5,)
plt.axis(False)
return plt
def plot_confidence(model_num=1):
model_name = f"model_{model_num}"
plt.figure(figsize=(10,3),dpi=100)
"""Plots the legend for plDDT."""
#########################################
plt.subplot(1,2,1); plt.title('Predicted lDDT')
plt.plot(plddts[model_name])
for x in [len(query_sequence_a)]:
plt.plot([x,x],[0,100],color="black")
plt.ylabel('plDDT')
plt.xlabel('position')
#########################################
plt.subplot(1,2,2);plt.title('Predicted Aligned Error')
plt.imshow(paes[model_name], cmap="bwr",vmin=0,vmax=30)
plt.colorbar()
plt.xlabel('Scored residue')
plt.ylabel('Aligned residue')
#########################################
return plt
def show_pdb(model_num=1, show_sidechains=False, show_mainchains=False, color="lDDT"):
model_name = f"model_{model_num}"
pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
view.addModel(open(pdb_filename,'r').read(),'pdb')
if color == "lDDT":
view.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':50,'max':90}}})
elif color == "rainbow":
view.setStyle({'cartoon': {'color':'spectrum'}})
elif color == "chain":
for n,chain,color in zip(range(2),list("ABCDEFGH"),
["lime","cyan","magenta","yellow","salmon","white","blue","orange"]):
view.setStyle({'chain':chain},{'cartoon': {'color':color}})
if show_sidechains:
BB = ['C','O','N']
view.addStyle({'and':[{'resn':["GLY","PRO"],'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.addStyle({'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
if show_mainchains:
BB = ['C','O','N','CA']
view.addStyle({'atom':BB},{'stick':{'colorscheme':f"WhiteCarbon",'radius':0.3}})
view.zoomTo()
return view
show_pdb(model_num,show_sidechains, show_mainchains, color).show()
if color == "lDDT": plot_plddt_legend().show()
plot_confidence(model_num).show()
#@title Package and download results
!zip -FSr $jobname".result.zip" $jobname".log" $jobname"_msa_coverage.png" $jobname"_"*"relaxed_model_"*".pdb" $jobname"_lDDT.png" $jobname"_PAE.png"
files.download(f"{jobname}.result.zip")
###Output
_____no_output_____ |
04. Joining Data with pandas/Joining Data with Pandas.ipynb | ###Markdown
1.1 - Inner join. Tables = DataFrames; Merging = Joining
###Code
import pandas as pd
ward = pd.read_csv('Ward_Offices.csv')
print(ward.head())
print(ward.shape)
print(ward.columns)
wards = ward[['WARD','ALDERMAN','ADDRESS','ZIPCODE']].copy()
census = ward[['WARD','CITY','STATE','WARD PHONE','EMAIL']]
###Output
_____no_output_____
###Markdown
> Merging tables > Inner join
###Code
wards_census = wards.merge(census,on='WARD')
wards_census.head()
print(wards_census.columns)
###Output
Index(['WARD', 'ALDERMAN', 'ADDRESS', 'ZIPCODE', 'CITY', 'STATE', 'WARD PHONE',
'EMAIL'],
dtype='object')
###Markdown
> Suffixes
###Code
wards_census = wards.merge(census, on='WARD', suffixes=('_cen','_ward'))
print(wards_census.head())
print(wards_census.shape)
###Output
WARD ALDERMAN ADDRESS \
0 33 Rodriguez Sanchez, Rossana 3001 West Irving Park Road
1 17 Moore, David H. 1344 West 79th Street
2 44 Tunney, Thomas 3223 North Sheffield Avenue, Suite A
3 37 Mitts, Emma 5344 West North Avenue
4 4 King, Sophia D. 435 East 35th Street
ZIPCODE CITY STATE WARD PHONE EMAIL
0 60618.0 Chicago IL (773) 840-7880 [email protected]
1 60636.0 Chicago IL (773) 783-3672 [email protected]
2 60657.0 Chicago IL (773) 525-6034 [email protected]
3 60651.0 Chicago IL (773) 379-0960 [email protected]
4 60616.0 Chicago IL (773) 536-8103 [email protected]
(50, 8)
###Markdown
1.2 - One to many relationships
###Code
licenses = pd.read_csv('Business_Licenses.csv')
print(licenses.head())
print(licenses.shape)
ward_licenses = wards.merge(licenses, on='WARD', suffixes=('_ward','_lic'))  # the ward column is uppercase 'WARD' in the Ward_Offices data; assumes the licenses file uses the same casing
print(ward_licenses.head())
###Output
_____no_output_____
###Markdown
1.3 - Merging multiple DataFrames > Theoretical merge
###Code
grants_licenses = grants.merge(licenses, on='zip')
print(grants_licenses.loc[grants_licenses['business']=="REGGIE'S BAR & GRILL",['grant','company','account','ward','business']])
###Output
_____no_output_____
###Markdown
> Single merge
###Code
grants.merge(licenses, on=['address','zip'])
###Output
_____no_output_____
###Markdown
> Merging multiple tables
###Code
grants_licenses_ward = grants.merge(licenses, on=['address','zip']).merge(wards, on='ward', suffixes=('_bus','_ward'))
grants_licenses_ward.head()
###Output
_____no_output_____
###Markdown
> Results
###Code
import matplotlib.pyplot as plt
grant_licenses_ward.groupby('ward').agg('sum').plot(kind='bar', y='grant')
plt.show()
###Output
_____no_output_____
###Markdown
> Merging even more
###Code
'''
Three tables:
df1.merge(df2, on='col').merge(df3, on='col')
Four tables:
df1.merge(df2, on='col').merge(df3, on='col').merge(df4, on='col')
'''
###Output
_____no_output_____
###Markdown
2.1 - Left join
###Code
movies = pd.read_csv('tmdb-movies.csv')
movies.shape
movies_taglines = movies.merge(taglines, on='id', how='left')
print(movies_taglines.head())
###Output
_____no_output_____
###Markdown
2.2 - Other joins > Right Join
###Code
tv_movies = movies.merge(tv_genre, how='right',left_on='id', right_on='movie_id')
print(tv_movies.head())
###Output
_____no_output_____
###Markdown
> Outer Join
###Code
family_comedy = family.merge(comedy, on='movie_id', how='outer',suffixes=('_fam','_com'))
print(family_comedy)
###Output
_____no_output_____
###Markdown
2.3 - Merging a table to itself
###Code
original_sequels = sequels.merge(sequels, left_on='sequel', right_on='id',suffixes=('_org','_seq'))
print(original_sequels.head())
###Output
_____no_output_____
###Markdown
> Merging a table to itself with left join
###Code
original_sequels = sequels.merge(sequels, left_on='sequel', right_on='id',how='left', suffixes=('_org','_seq'))
print(original_sequels.head())
###Output
_____no_output_____
###Markdown
2.4 - Merging on indexes > Setting an index
###Code
movies = pd.read_csv('tmdb-movies.csv', index_col=['id'])
print(movies.head())
###Output
imdb_id popularity budget revenue \
id
135397 tt0369610 32.985763 150000000 1513528810
76341 tt1392190 28.419936 150000000 378436354
262500 tt2908446 13.112507 110000000 295238201
140607 tt2488496 11.173104 200000000 2068178225
168259 tt2820852 9.335014 190000000 1506249360
original_title \
id
135397 Jurassic World
76341 Mad Max: Fury Road
262500 Insurgent
140607 Star Wars: The Force Awakens
168259 Furious 7
cast \
id
135397 Chris Pratt|Bryce Dallas Howard|Irrfan Khan|Vi...
76341 Tom Hardy|Charlize Theron|Hugh Keays-Byrne|Nic...
262500 Shailene Woodley|Theo James|Kate Winslet|Ansel...
140607 Harrison Ford|Mark Hamill|Carrie Fisher|Adam D...
168259 Vin Diesel|Paul Walker|Jason Statham|Michelle ...
homepage director \
id
135397 http://www.jurassicworld.com/ Colin Trevorrow
76341 http://www.madmaxmovie.com/ George Miller
262500 http://www.thedivergentseries.movie/#insurgent Robert Schwentke
140607 http://www.starwars.com/films/star-wars-episod... J.J. Abrams
168259 http://www.furious7.com/ James Wan
tagline \
id
135397 The park is open.
76341 What a Lovely Day.
262500 One Choice Can Destroy You
140607 Every generation has a story.
168259 Vengeance Hits Home
keywords \
id
135397 monster|dna|tyrannosaurus rex|velociraptor|island
76341 future|chase|post-apocalyptic|dystopia|australia
262500 based on novel|revolution|dystopia|sequel|dyst...
140607 android|spaceship|jedi|space opera|3d
168259 car race|speed|revenge|suspense|car
overview runtime \
id
135397 Twenty-two years after the events of Jurassic ... 124
76341 An apocalyptic story set in the furthest reach... 120
262500 Beatrice Prior must confront her inner demons ... 119
140607 Thirty years after defeating the Galactic Empi... 136
168259 Deckard Shaw seeks revenge against Dominic Tor... 137
genres \
id
135397 Action|Adventure|Science Fiction|Thriller
76341 Action|Adventure|Science Fiction|Thriller
262500 Adventure|Science Fiction|Thriller
140607 Action|Adventure|Science Fiction|Fantasy
168259 Action|Crime|Thriller
production_companies release_date \
id
135397 Universal Studios|Amblin Entertainment|Legenda... 6/9/15
76341 Village Roadshow Pictures|Kennedy Miller Produ... 5/13/15
262500 Summit Entertainment|Mandeville Films|Red Wago... 3/18/15
140607 Lucasfilm|Truenorth Productions|Bad Robot 12/15/15
168259 Universal Pictures|Original Film|Media Rights ... 4/1/15
vote_count vote_average release_year budget_adj revenue_adj
id
135397 5562 6.5 2015 1.379999e+08 1.392446e+09
76341 6185 7.1 2015 1.379999e+08 3.481613e+08
262500 2480 6.3 2015 1.012000e+08 2.716190e+08
140607 5292 7.5 2015 1.839999e+08 1.902723e+09
168259 2947 7.3 2015 1.747999e+08 1.385749e+09
###Markdown
> MultiIndex datasets
###Code
samuel = pd.read_csv('samuel.csv',index_col=['movie_id','cast_id'])
print(samuel.head())
samuel_casts = samuel.merge(casts, on=['movie_id','cast_id'])
print(samuel_casts.head())
print(samuel_casts.shape)
###Output
_____no_output_____
###Markdown
> Index merge with left_on and right_on
###Code
movies_genres = movies.merge(movie_to_genres, left_on='id', left_index=True,right_on='movie_id', right_index=True)
print(movies_genres.head())
###Output
_____no_output_____
###Markdown
3.1 - Filtering joins > Step 1 - semi-join
###Code
genres_tracks = genres.merge(top_tracks, on='gid')
print(genres_tracks.head())
###Output
_____no_output_____
###Markdown
> Step 2 - semi-join
###Code
genres['gid'].isin(genres_tracks['gid'])
###Output
_____no_output_____
###Markdown
> Step 3 - semi-join
###Code
genres_tracks = genres.merge(top_tracks, on='gid')
top_genres = genres[genres['gid'].isin(genres_tracks['gid'])]
print(top_genres.head())
###Output
_____no_output_____
###Markdown
> Step 1 - anti-join
###Code
genres_tracks = genres.merge(top_tracks, on='gid', how='left', indicator=True)
print(genres_tracks.head())
###Output
_____no_output_____
###Markdown
> Step 2 - anti-join
###Code
gid_list = genres_tracks.loc[genres_tracks['_merge'] == 'left_only','gid']
print(gid_list.head())
###Output
_____no_output_____
###Markdown
> Step 3 - anti-join
###Code
genres_tracks = genres.merge(top_tracks, on='gid', how='left', indicator=True)
gid_list = genres_tracks.loc[genres_tracks['_merge'] =='left_only','gid']
non_top_genres = genres[genres['gid'].isin(gid_list)]
print(non_top_genres.head())
###Output
_____no_output_____
###Markdown
3.2 - Concatenate DataFrames together vertically > Basic concatenation
###Code
pd.concat([inv_jan, inv_feb, inv_mar])
###Output
_____no_output_____
###Markdown
> Ignoring the index
###Code
pd.concat([inv_jan, inv_feb, inv_mar],ignore_index=True)
###Output
_____no_output_____
###Markdown
> Setting labels to original tables
###Code
pd.concat([inv_jan, inv_feb, inv_mar], ignore_index=False,keys=['jan','feb','mar'])
###Output
_____no_output_____
###Markdown
> Concatenate tables with different column names
###Code
pd.concat([inv_jan, inv_feb],sort=True)
pd.concat([inv_jan, inv_feb],join='inner')
# .append() is a simplified version of the .concat() method:
# it supports ignore_index and sort, but not keys or join (it always behaves as join='outer')
###Output
_____no_output_____
###Markdown
> Using append method
###Code
inv_jan.append([inv_feb, inv_mar],ignore_index=True,sort=True)
###Output
_____no_output_____
###Markdown
3.3 - Verifying integrity > Merge validate: one_to_one
###Code
tracks.merge(specs, on= 'tid',validate='one_to_one')
###Output
_____no_output_____
###Markdown
> Merge validate: one_to_many
###Code
albums.merge(tracks, on='aid',validate='one_to_many')
###Output
_____no_output_____
###Markdown
> Verifying concatenation: example
###Code
pd.concat([inv_feb, inv_mar],verify_integrity=False)
pd.concat([inv_feb, inv_mar],verify_integrity=True)
###Output
_____no_output_____
###Markdown
4.1 - Using merge_ordered() > Merging stock data
###Code
import pandas as pd
pd.merge_ordered(appl, mcd, on='date', suffixes=('_aapl','_mcd'))
###Output
_____no_output_____
###Markdown
> Forward fill example
###Code
pd.merge_ordered(appl, mcd, on='date',suffixes=('_aapl','_mcd'),fill_method='ffill')
###Output
_____no_output_____
###Markdown
4.2 - Using merge_asof()
###Code
pd.merge_asof(visa, ibm, on='date_time',suffixes=('_visa','_ibm'))
###Output
_____no_output_____
###Markdown
> merge_asof() example with direction
###Code
pd.merge_asof(visa, ibm, on=['date_time'],suffixes=('_visa','_ibm'),direction='forward')
###Output
_____no_output_____
###Markdown
4.3 - Selecting data with .query() > Querying on multiple conditions, "and", "or"
###Code
stocks.query('nike > 90 and disney < 140')
stocks.query('nike > 96 or disney < 98')
###Output
_____no_output_____
###Markdown
> Using .query() to select text
###Code
stocks_long.query('stock=="disney" or (stock=="nike" and close < 90)')
###Output
_____no_output_____
###Markdown
4.4 - Reshaping data with .melt()
###Code
social_fin_tall = social_fin.melt(id_vars=['financial','company'])
print(social_fin_tall.head(10))
###Output
_____no_output_____
###Markdown
> Melting with value_vars
###Code
social_fin_tall = social_fin.melt(id_vars=['financial','company'],value_vars=['2018','2017'])
print(social_fin_tall.head(9))
###Output
_____no_output_____
###Markdown
> Melting with column names
###Code
social_fin_tall = social_fin.melt(id_vars=['financial','company'],value_vars=['2018','2017'],var_name=['year'], value_name='dollars')
print(social_fin_tall.head(8))
###Output
_____no_output_____ |
docs/textgenrnn-synthesize.ipynb | ###Markdown
textgenrnn 1.5 Model Synthesis by [Max Woolf](http://minimaxir.com) *Max's open-source projects are supported by his [Patreon](https://www.patreon.com/minimaxir). If you found this project helpful, any monetary contributions to the Patreon are appreciated and will be put to good creative use.* Intro: You can predict texts from multiple models simultaneously using the `synthesize` function, allowing the creation of texts which incorporate multiple styles without "locking" into a given style. You will get better results if the input models are trained with high `dropout` (0.8-0.9).
###Code
from textgenrnn import textgenrnn
from textgenrnn.utils import synthesize, synthesize_to_file
m1 = "gaming"
m2 = "Programmerhumor"
def create_textgen(model_name):
return textgenrnn(weights_path='{}_weights.hdf5'.format(model_name),
vocab_path='{}_vocab.json'.format(model_name),
config_path='{}_config.json'.format(model_name),
name=model_name)
model1 = create_textgen(m1)
model2 = create_textgen(m2)
###Output
_____no_output_____
###Markdown
You can pass a `list` of models to generate from to `synthesize`. The rest of the input parameters are the same as `generate`.
###Code
models_list = [model1, model2]
synthesize(models_list, n=5, progress=False)
###Output
I wonder why the first thing I do not use the secret of the game of all the games of all the games of all the games of all the games of all the games of all the games of all the games of all the games of the same console interview with a game that they said...
When you play a game that you don't think we are doing the requirements
The story of my childhood
Playing a game for the first time and it still works
When you finally finish your friends.
###Markdown
The model generation order is randomized for each creation. It may be worthwhile to double or triple up on models so that the text can generate from the same "model" for multiple tokens. E.g. `models_list*3` triples the number of input models, allowing generation strategies such as `[model1, model1, model2, model1, model2, model2]`.
###Code
synthesize(models_list*3, n=5, progress=False)
###Output
When you have to see this on my favorite class
This is what happens when you can use the day off the game of the new code for my company for this post
The struggle of the real world but you like the same gun while we're talking about the Star Trate Internship and Mortal Stack of Debugger - PlayStation 4 has some commit history of the real time so many skyrim the struggle
When you get a new project to a console game for a sequel to the post, because you don't know what you see well because it's the first time are the second of the "Source code" in the world
How to properly delete the final statement of code and still has been announced to program
###Markdown
You can also `synthesize_to_file`.
###Code
synthesize_to_file(models_list*3, "synthesized.txt", n=10)
###Output
100%|██████████| 10/10 [00:10<00:00, 1.04it/s]
###Markdown
You can also use more than 2 models. One approach is to create a weighted average, for example, create a model that is 1/2 `model1`, 1/4 `model2`, 1/4 `model3`.
###Code
m3 = "PrequelMemes"
model3 = create_textgen(m3)
models_list2 = [model1, model1, model2, model3]
synthesize(models_list2, n=5, progress=False)
###Output
So I was playing Skyrim and you could find a startup game I made a complete controller at work today. Here's a feature
I have a complete face of a game on a short classic today, I have a second one
When you finally go to the same time to get the game of all time.
The only man who game developers start a series space because I'm looking through the comments and I still have a lot of powerful up when the game was wrong...
When you continue but you can play in the world when I have a good day at the parents for me and my company wants to sell your ass about the rest of the console assignment and got into a meme with a bug in the game
###Markdown
For character-level models, the models "switch" by default after a list of `stop_tokens`, which by default are a space character or a newline. You can override this behavior by passing `stop_tokens=[]` to a synthesize function, which will cause the model to switch after each character (note: may lead to *creative* results!). Word-level models will always switch after each generated token.
###Code
synthesize(models_list2, n=5, progress=False, stop_tokens=[])
###Output
The first time an all
Complete Discord Secrets After Made a Couple Front Even So I Love A Search to Buy the Story Scorping character set on the star of the End of the Game of The Year To Post Out of Shenm
I hope the bottom release was a good church and a promil of the party and save your part of my childhootg.
When you see a sequerter in the comments
It all makes me see a prequel problem with a monster from the past!
|
Unit_2_Build/VGD_build_notebook.ipynb | ###Markdown
Exploration. TOC: df - usage; explore - plotting the difference between user and critic scores. I made a copy of the df for each iteration; the problem with this would be memory management for much larger datasets, but I'm not sure what optimizations happen in the background (does pandas just query a single DataFrame in memory and return the results, or does it actually create a copy in memory? What is happening at the hardware level?). User vs Critic Scores
###Code
import seaborn as sns
#User score vs critic score:
#Lets get rid of nans
explore = train  # note: plain assignment is not a copy; the dropna below returns a new frame anyway
explore.shape
explore = explore.dropna(subset = ['user_count','critic_count'])
explore.shape
explore.critic_score = explore.critic_score/10
explore.user_score = explore.user_score.astype(float)
x = list(range(len(explore)))
y1 = explore.critic_score.to_list()
y2 = explore.user_score.to_list()
y1, y2 = zip(*sorted(zip(y1,y2)))
# the other way to do this is probably through sortby in pandas
sns.lineplot(x = x, y = y1, data= explore)
sns.lineplot(x = x, y = y2, data = explore)
###Output
_____no_output_____
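###Markdown
An aside on the copy question raised at the top of this section: a tiny self-contained check (separate from the data above) showing that plain assignment only binds another name to the same DataFrame, while `.copy()` gives an independent one.
###Code
import pandas as pd
demo = pd.DataFrame({'a': [1, 2, 3]})
alias = demo             # same underlying object, nothing is copied
real_copy = demo.copy()  # new, independent DataFrame
alias.loc[0, 'a'] = 99
print(demo.loc[0, 'a'])       # 99 -> demo changed through the alias
print(real_copy.loc[0, 'a'])  # 1  -> the copy is unaffected
###Output
_____no_output_____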
###Markdown
The line plot above tells me that user and critic scores don't really track. This may be due to a bias in the way these scores are collected; I would guess that the closer a game is to average, the better the scores line up, compared to the extremes. It does seem like every "step" of critic_score has a range of possible values for user_score, so I can look at the spread of user scores for every critic score and see if I can glean some information from that (I do this below). This also suggests an interesting feature for a different model, or maybe for this one: we could look at whether the critic score and the user score match for specific games and set that as the target for a model (sketched a couple of cells below). That model would produce the boolean/number describing the difference between the critic score and the user score, which could then be used in another model to predict the critic score. You wouldn't have leakage because each model is still working from the base input information. I also don't think the value would be a huge predictor, because the error of the first model would show up as noise in the second model; the feature wouldn't track perfectly because the value it uses is itself imperfect. If there is a manual way to tone down the effect of a specific feature, I could also make sure this "engineered" feature isn't overused by dampening its effect in the model.
###Code
#Pandas sortby test:
explore.sort_values("critic_score").head()
#From here I can just take critic score, and user score turn them into lists
#and send them to my graphing function. Might be more memory efficient my previous
#way.
explore.critic_score.unique().shape
#There are only 80 "steps" to the graph.
critic_score_values = sorted(explore.critic_score.unique())
# critic_score_values
type(critic_score_values[0])
critic_score_value = 1.9
mask = explore.critic_score==critic_score_value
mask
a = explore[mask]
amean = a.user_score.mean()
amean
astd= a.user_score.std()
astd
###Output
_____no_output_____
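###Markdown
To make the engineered-feature idea from the previous note concrete, here is a minimal sketch (hypothetical column names; nothing later in this notebook depends on it) of the agreement target: the per-game gap between the two scores and a boolean for whether they roughly match.
###Code
# both scores are on the 0-10 scale in `explore` at this point
agreement = explore[['critic_score', 'user_score']].copy()
agreement['score_gap'] = agreement['critic_score'] - agreement['user_score']
agreement['scores_match'] = agreement['score_gap'].abs() <= 0.5  # boolean version of the target
agreement.head()
###Output
_____no_output_____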
###Markdown
User/Critic difference graph
###Code
import matplotlib.pyplot as plt
import seaborn as sns
# critic_score_values
ameans = []
astds = []
for my_value in critic_score_values:
mask = explore.critic_score == my_value
a = explore[mask]
amean = a.user_score.mean()
astd = a.user_score.std()
ameans.append(amean)
astds.append(astd)
graph_zip = list(zip(critic_score_values,ameans,astds))
#slice doesn't work well here beacause I have a list of tuples not a list of lists
#but I can see that I have reasonable values by uncommenting the following:
# graph_zip
#plus and minus one std from mean
p1_std = list(np.array(ameans)+np.array(astds))
m1_std = list(np.array(ameans)-np.array(astds))
x = list(range(len(ameans)))
sns.lineplot(x,critic_score_values, color= 'red')
sns.lineplot(x,ameans, color = '#48639c')
# sns.lineplot(x,astds)
sns.lineplot(x,p1_std,color = '#8a9bc0')
sns.lineplot(x,m1_std,color = '#8a9bc0')
#Add a distribution on top of this to describe number of
#values used to caluclate the mean
#Plot 1 : Less scores in the lower end of scores <- historgram plot
#Plot 2 the plot below:
#Scatter plot user on 1 axis and critic on another axis (use alpha/opacity) (apply some jitter?< gets rid of overlapping points for non-continousu variables)
#bivariant distribitions in seaborn (hexbin plot<-- try this?)
###Output
_____no_output_____
###Markdown
The above graph shows the relationship between critic scores (red) and mean user scores (blue), along with +- 1 standard deviation of the user scores (lighter blue). Notice that on average user scores are higher for critic ratings below around 7/10, and lower for ratings above 7/10. This could be for a statistical reason: there are simply fewer scores below 6 than above 6, so there is more uncertainty associated with lower user scores, and that uncertainty is reflected in the standard deviation curves. Aside from this bias, the data can also be interpreted as follows: it seems like the games that critics love, the games that score best according to critics, are not appreciated as much by users. This could be for a variety of reasons. In some cases it may be that the "hype" for the game was overdone, leading to high expectations; when those expectations weren't met, the user scores suffered. I'm not completely sure which of these reasons is the truth, but if the hype explanation is true, that would mean that a truly great game should be given a slightly lower score in order to attract more users? Personally I have sometimes felt that the games I was supposed to like, the "classics", didn't live up to my expectations; oftentimes the games I enjoy the most are the ones that are unexpectedly good. Part of their value lies in the discovery of the game. My brain doesn't judge while I play, it just absorbs, and that makes the experience better for me.
###Code
sns.distplot(train.critic_score)
#User score is an object so I have to change that before
print(train.critic_score.dtype)
print(train.user_score.dtype) # O = object
#interesting that these two are different:
print(train.user_score.dtype)
train.user_score.dtype
print(train.user_score.shape)
mask = train.user_score=="tbd"
print(train.user_score[~mask].shape)
user_score_float = train.user_score[~mask].dropna().astype(float)*10
user_score_float
import matplotlib.pyplot as plt
sns.distplot(user_score_float,label='User Score')
#adding the critic score to this user score:
ax = sns.distplot(train.critic_score,label='Critic Score')
ax.legend()
ax.set_xlabel('Score (Out of 100)')
ax.set_ylabel('Kernel Density Estimate ("%" of Data)')
plt.show()
plt.savefig('Score_histogram.jpeg',format='jpeg',dpi=1200)
###Output
_____no_output_____
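###Markdown
Before interpreting the gap further, a minimal sketch (assuming the `train` frame from above, with `user_score` still holding "tbd" strings) of one way to test it: a paired t-test on games that have both scores. A Wilcoxon signed-rank test would be the non-parametric alternative.
###Code
from scipy import stats
# keep only games with both scores, then put user scores on the 0-100 scale
paired = train[train.user_score != "tbd"].dropna(subset=['user_score', 'critic_score'])
user_scaled = paired.user_score.astype(float) * 10
critic = paired.critic_score.astype(float)
t_stat, p_value = stats.ttest_rel(critic, user_scaled)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3g}")
###Output
_____no_output_____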
###Markdown
Hypothesis testing, like the paired t-test sketched above, would tell me whether this difference between critics and users is due to randomness or not. Scatter Plots:
###Code
#lets use the mask I made to make the critic and user the same length:
mask = train.user_score=="tbd"
user_score_float = train.user_score[~mask].dropna().astype(float)*10
critic_score_masked = train.critic_score[~mask]
#now we can plot them against each other:
sns.scatterplot(critic_score_masked,user_score_float,alpha = 0.2)
###Output
_____no_output_____
###Markdown
Experiment to use more seaborn features
###Code
#I can just drop user_count = NAN
print(train.shape)
train_scatter = train.dropna(subset=['user_count'])
train_scatter.shape
#create a scaled user score column
train_scatter['user_score_scaled'] = train_scatter.user_score*10
# fig,ax = plt.subplots()
# sns.scatterplot('critic_score','user_score_scaled',data = train_scatter,ax=ax);
# ax.get_ylabel()
#I have to fix the ylabels in the graph
train.head()
###Output
_____no_output_____
###Markdown
User Scores Model:
###Code
#Setting up matrices for models:
target = 'user_score'
#the first three are leaky and the rest are high cardinality.
drop = ['critic_count','critic_score','user_count','name','publisher','developer']
features = train.columns.drop(target)
features = features.drop(drop)
features
X_train = train[features]
y_train = train[target]
y_train_user = y_train
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
#Some verification:
train.shape,X_train.shape
y_val
###Output
_____no_output_____
###Markdown
Baseline
###Code
from sklearn.metrics import mean_absolute_error as mae
from sklearn.metrics import r2_score
X_train.head()
y_baseline = [y_train.mean()]*len(y_val)
print(f'Mean absolute error: {mae(y_val,y_baseline)}')
print(f'R2 score: {r2_score(y_val,y_baseline)}')
###Output
Mean absolute error: 1.1189890267378646
R2 score: -0.0005219002190244293
###Markdown
Model 1 (Tree Based Regression)
###Code
#High cardinality columns... might have to get rid of them or process them somehow.
# print('Uniques in "developer" column',len(train.developer.unique()))
# train.developer.unique()
# print('Uniques in "publisher" column',len(train.publisher.unique()))
# train.publisher.unique()
import category_encoders as ce
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
process = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
)
X_train_processed = process.fit_transform(X_train)
X_val_processed = process.transform(X_val)
model = RandomForestRegressor(
n_jobs = -2,
n_estimators=100,
criterion='mae',
)
model.fit(X_train_processed,y_train)
print('Training Error:', model.score(X_train_processed,y_train))
print('Validation Error:',model.score(X_val_processed,y_val))
import seaborn as sns
#Visual Error observation:
a = y_train
p = model.predict(X_train_processed)
x = list(range(len(y_train)))
a1,p1 = zip(*sorted(zip(a,p)))
sns.lineplot(x,a1)
sns.lineplot(x,p1)
import shap
def shap_plot(row_number):
row = X_train.iloc[[row_number]]
row_processed = process.transform(row)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row_processed)
shap.initjs()
return shap.force_plot(
base_value = explainer.expected_value,
shap_values = shap_values,
features = row,
)
shap_plot(3972)
user_score_predictions = model.predict(X_train_processed)  # predictions from the model trained on user_score
user_score_predictions.shape
###Output
_____no_output_____
###Markdown
Model 2 (Linear Regression)
###Code
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
process = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
)
X_train_processed = process.fit_transform(X_train)
X_val_processed = process.transform(X_val)
model = LinearRegression(
n_jobs = -2,
)
model.fit(X_train_processed,y_train)
print('Training Error:', model.score(X_train_processed,y_train))
print('Validation Error:',model.score(X_val_processed,y_val))
list(zip(X_train.columns,model.coef_))
import seaborn as sns
#Visual Error observation:
a = y_train
p = model.predict(X_train_processed)
x = list(range(len(y_train)))
a1,p1 = zip(*sorted(zip(a,p)))
sns.lineplot(x,a1)
sns.lineplot(x,p1)
###Output
_____no_output_____
###Markdown
Guessing Critic Scores
###Code
target = 'critic_score'
#the first three are leaky(ish) and "name" is high cardinality.
drop = ['critic_count','user_score','user_count','name','publisher','developer']
features = train.columns.drop(target)
features = features.drop(drop)
features
train[['na_players','eu_players','jp_players','other_players','global_players']]
np.random.random(len(train))
X_train = train[features]
y_train = train[target]
y_train_critic = y_train
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
#Some verification:
train.shape,X_train.shape
###Output
_____no_output_____
###Markdown
Baseline
###Code
from sklearn.metrics import mean_absolute_error as mae
from sklearn.metrics import r2_score
X_train.head()
y_baseline = [y_train.mean()]*len(y_train)
print("TRAIN SET")
print(f'Mean absolute error: {mae(y_train,y_baseline)}')
print(f'R2 score: {r2_score(y_train,y_baseline)}')
y_baseline = [y_train.mean()]*len(y_val)
print("VALIDATION SET")
print(f'Mean absolute error: {mae(y_val,y_baseline)}')
print(f'R2 score: {r2_score(y_val,y_baseline)}')
y_baseline = [y_train.mean()]*len(y_test)
print("TEST SET")
print(f'Mean absolute error: {mae(y_test,y_baseline)}')
print(f'R2 score: {r2_score(y_test,y_baseline)}')
###Output
TRAIN SET
Mean absolute error: 11.030449019534043
R2 score: 0.0
VALIDATION SET
Mean absolute error: 10.592915538748247
R2 score: -0.0016937060164436968
TEST SET
Mean absolute error: 11.287144771172695
R2 score: -0.00048625404405999717
###Markdown
Model 1 (Tree Based Regression)
###Code
#High cardinality columns... might have to get rid of them or process them somehow.
# print('Uniques in "developer" column',len(train.developer.unique()))
# train.developer.unique()
# print('Uniques in "publisher" column',len(train.publisher.unique()))
# train.publisher.unique()
import category_encoders as ce
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
process = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
)
X_train_processed = process.fit_transform(X_train)
X_val_processed = process.transform(X_val)
X_test_processed = process.transform(X_test)  # transform (not fit_transform) so the test set reuses the encoder/imputer fitted on the training data
model = RandomForestRegressor(
n_jobs = -2,
n_estimators=100,
criterion='mae',
)
model.fit(X_train_processed,y_train)
print('Training Error:', model.score(X_train_processed,y_train))
print('Validation Error:',model.score(X_val_processed,y_val))
print('Test Error:',model.score(X_test_processed,y_test))
import seaborn as sns
#Visual Error observation:
a = y_train
p = model.predict(X_train_processed)
x = list(range(len(y_train)))
a1,p1 = zip(*sorted(zip(a,p)))
sns.lineplot(x,a1)
sns.lineplot(x,p1)
###Output
_____no_output_____
###Markdown
Model 2 (Linear Regression)
###Code
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
process = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
)
X_train_processed = process.fit_transform(X_train)
X_val_processed = process.transform(X_val)
X_test_processed = process.transform(X_test)
lr_model = LinearRegression(
n_jobs = -2,
)
lr_model.fit(X_train_processed,y_train)
print('Training Error:', lr_model.score(X_train_processed,y_train))
print('Validation Error:',lr_model.score(X_val_processed,y_val))
#Lets calculate mean absolute error:
from sklearn.metrics import mean_absolute_error as mae
print('Training Error:', mae(lr_model.predict(X_train_processed),y_train))
print('Validation Error:',mae(lr_model.predict(X_val_processed),y_val))
print('Test Error:', mae(lr_model.predict(X_test_processed),y_test))
import shap
def shap_plot(row_number):
row = X_train.iloc[[row_number]]
row_processed = process.transform(row)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row_processed)
shap.initjs()
return shap.force_plot(
base_value = explainer.expected_value,
shap_values = shap_values,
features = row,
)
shap_plot(3972)
critic_score_predictions = model.predict(X_train_processed)  # predictions from the model trained on critic_score
critic_score_predictions.shape
###Output
_____no_output_____
###Markdown
Running outputs through another model. I have predictions for the user score and for the critic score. I want to create a new model that takes these two values, and the trends in them, to make a prediction for critic_score.
###Code
user_score_predictions.shape,critic_score_predictions.shape,user_score_predictions,critic_score_predictions
y_train.shape,y_train_critic
y_train_critic_reindex = y_train_critic.reset_index().drop('index',1)
y_train_critic_reindex
predictions = list(zip(critic_score_predictions,user_score_predictions))
# predictions
predictions_df = pd.DataFrame(predictions,columns=['critic_score_pred','user_score_pred'])
predictions_df
preds_wtarget = pd.concat([predictions_df,y_train_critic_reindex],1)
preds_wtarget
#I had some filtering in this step before, but I moved it to the wrangle function, and I no longer needed it here
#mr = model ready
preds_mr = preds_wtarget
preds_mr.dtypes
preds_mr.isna().sum()
train_s,test_s = train_test_split(preds_mr)
target = 'critic_score'
features = preds_mr.columns.drop(target)
X_train = train_s[features]
y_train = train_s[target]
X_test = test_s[features]
y_test = test_s[target]
###Output
_____no_output_____
###Markdown
Linear Regressor
###Code
from sklearn.linear_model import LinearRegression
lr_synthesis = LinearRegression()
lr_synthesis.fit(X_train,y_train)
lr_synthesis.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Random Forest Regressor
###Code
from sklearn.ensemble import RandomForestRegressor
rfr_synthesis = RandomForestRegressor(
n_jobs = -2,
n_estimators=100,
criterion='mae',
)
rfr_synthesis.fit(X_train,y_train)
rfr_synthesis.score(X_test,y_test)
import seaborn as sns
#Visual Error observation:
a = y_train
p = rfr_synthesis.predict(X_train)
x = list(range(len(y_train)))
a1,p1 = zip(*sorted(zip(a,p)))
sns.lineplot(x,a1)
sns.lineplot(x,p1)
preds_mr
import shap
row_num = 1457
row = preds_mr.iloc[[row_num]]
explainer = shap.TreeExplainer(rfr_synthesis)
shap_values = explainer.shap_values(row)
observation = row[features]
actual = preds_mr.critic_score.iloc[row_num]
prediction = rfr_synthesis.predict(observation)
print(f'Actual Value= {actual}')
print(f'Prediction = {prediction[0]:,.02f}')
print(f'Error = {abs(prediction[0]-actual):,.02f}')
shap.initjs()
shap.force_plot(
base_value = explainer.expected_value,
shap_values = shap_values,
features = row)
###Output
_____no_output_____
###Markdown
I was having a lot of trouble with the `predict` function. This was because when I take `iloc[]` I get a pandas Series; I need to do `iloc[[]]` to get a pandas DataFrame. A DataFrame can be passed as a parameter to `model.predict`, but a Series cannot, and it throws up various errors depending on the state of the Series. The code below shows how I finally figured it out.
###Code
#pandas Series
preds_mr.iloc[row_num]
#Pandas DataFrame
preds_mr.iloc[[row_num]]
#Pandas DataFrame without target
#At this state it is ready to go into model.predict()
preds_mr.iloc[[row_num]][features]
###Output
_____no_output_____ |
Notebooks/ML-LinearRegression-Models-with coorelation.ipynb | ###Markdown
Filter Outliers. To get rid of outliers, we need to filter out: * properties sold before 2019 * land size over 2000 sqm * car spaces or bedrooms over 6 * property prices over 3M
###Code
df['sold_date']= df['sold_date'].astype('datetime64[ns]')
df['sold_date'] = df['sold_date'].dt.strftime('%Y')
df['sold_date'] = df['sold_date'].astype('int')
df['rent_date']= df['rent_date'].astype('datetime64[ns]')
df['rent_date'] = df['rent_date'].dt.strftime('%Y')
df['rent_date'] = df['rent_date'].astype('int')
df.dtypes
# After2019_df = df[(df['sold_date']>2019)]
# After2019_df
Less2000sqm_df = df[(df['land_size']<2000)]
Less2000sqm_df
carspaceLessThan7_df = Less2000sqm_df[(Less2000sqm_df['car_space']<7)]
carspaceLessThan7_df
BedroomsLessthan7_df = carspaceLessThan7_df[(carspaceLessThan7_df['bedrooms']<7)]
BedroomsLessthan7_df
filtered_df = BedroomsLessthan7_df[(BedroomsLessthan7_df['price']<3000000)]
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
# filtered_df = df
#only getting landed properties for the machine learning
house = filtered_df[(filtered_df['property_type']=='House') | (filtered_df['property_type']=='Villa') | (filtered_df['property_type']=='Townhouse')]
house
house.columns
house = pd.get_dummies(house, columns=["suburb"])
house
house = pd.get_dummies(house, columns=["property_type"])
house
# for i in range(0,len(house['sale_id'])):
# if house.iloc[i,13] =='Villa':
# house.iloc[i,26] = 1
# elif house.iloc[i,13] =='Townhouse':
# house.iloc[i,13]= 2
# elif house.iloc[i,13] =='House':
# house.iloc[i,13] = 3
# # house.iloc[i,15] = int(house.iloc[i,15].split('-')[0])
# # if house.iloc[i,15]== 1900:
# # house.iloc[i,15] = 0
# house
# house['median_income'] = ''
# for i in range(0,len(house['sale_id'])):
# if house.iloc[i,19] =='Perth':
# house.iloc[i,20] = 1750
# elif house.iloc[i,19] =='Crawley':
# house.iloc[i,20]= 1145
# elif house.iloc[i,19] =='Nedlands':
# house.iloc[i,20] = 2217
# elif house.iloc[i,19] =='Northbridge':
# house.iloc[i,20] = 1385
# elif house.iloc[i,19] =='Northbridge':
# house.iloc[i,20] = 1385
# house['perth'] = ''
# house['east_perth'] = ''
# house['west_perth'] = ''
# house['northbridge'] = ''
# house['crawley'] = ''
# house['nedlands'] = ''
# house['Villa'] = ''
# house['Townhouse'] = ''
# house['House'] = ''
# for i in range(0,len(house['sale_id'])):
# if house.iloc[i,19] =='Perth':
# house.iloc[i,20] = 1
# house.iloc[i,21:26] = 0
# elif house.iloc[i,19] =='East Perth':
# house.iloc[i,21] = 1
# house.iloc[i,20] = 0
# house.iloc[i,22:26] = 0
# elif house.iloc[i,19] =='West Perth':
# house.iloc[i,22] = 1
# house.iloc[i,20:22] = 0
# house.iloc[i,23:26] = 0
# elif house.iloc[i,19] =='Northbridge':
# house.iloc[i,23] = 1
# house.iloc[i,20:23] = 0
# house.iloc[i,24:26] = 0
# elif house.iloc[i,19] =='Crawley':
# house.iloc[i,24] = 1
# house.iloc[i,20:24] = 0
# house.iloc[i,25] = 0
# elif house.iloc[i,19] =='Nedlands':
# house.iloc[i,25] = 1
# house.iloc[i,20:25] = 0
# if house.iloc[i,13] =='Villa':
# house.iloc[i,26] = 1
# house.iloc[i,27:29] = 0
# elif house.iloc[i,13] =='Townhouse':
# house.iloc[i,27]= 1
# house.iloc[i,26] = 0
# house.iloc[i,28] = 0
# elif house.iloc[i,13] =='House':
# house.iloc[i,28] = 1
# house.iloc[i,26:28] = 0
# house
# Assign the data to X and y
X = house.drop("price",axis=1)
y = house["price"].values.reshape(-1, 1)
print(X.shape, y.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=70)
X_train.shape,X_test.shape
import seaborn as sns
plt.figure(figsize=(12,10))
cor = X_train.corr()
sns.heatmap(cor, annot = True, cmap= plt.cm.CMRmap_r)
def correlation(dataset, threshold):
col_corr = set()
corr_matrix = dataset.corr()
for i in range(len(corr_matrix.columns)):
for j in range(i):
if abs(corr_matrix.iloc[i,j]) > threshold:
colname = corr_matrix.columns[i]
col_corr.add(colname)
return col_corr
corr_features = correlation(X_train, 0.9)
len(set(corr_features))
corr_features
X_train = X_train.drop(corr_features, axis=1)
X_train.columns
X_train = X_train[['bathrooms', 'bedrooms', 'building_size',
'built_date', 'car_space', 'land_size', 'lat', 'lng', 'rent', 'rent_date', 'sold_date', 'suburb_Crawley', 'suburb_East Perth', 'suburb_Nedlands',
'suburb_Northbridge', 'suburb_Perth', 'suburb_West Perth',
'property_type_House', 'property_type_Villa']]
X_train
X_test = X_test[['bathrooms', 'bedrooms', 'building_size',
'built_date', 'car_space', 'land_size', 'lat', 'lng', 'rent', 'rent_date', 'sold_date', 'suburb_Crawley', 'suburb_East Perth', 'suburb_Nedlands',
'suburb_Northbridge', 'suburb_Perth', 'suburb_West Perth',
'property_type_House', 'property_type_Villa']]
X_test
# house.sort_values(by = ['price'], ascending=False)
# # Assign the data to X and y
# X = house[["bedrooms", "bathrooms", "car_space", "land_size", "building_size", "built_date", "postcode"]]
# y = house["price"].values.reshape(-1, 1)
# print(X.shape, y.shape)
# # Assign the data to X and y
# X = house[["bedrooms", "bathrooms", "car_space", "land_size", "building_size", "built_date", "perth", "west_perth", "east_perth", "northbridge", "crawley", "nedlands"]]
# y = house["price"].values.reshape(-1, 1)
# print(X.shape, y.shape)
# from sklearn.model_selection import train_test_split
# X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.preprocessing import StandardScaler
# Create a StandardScater model and fit it to the training data
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
# Transform the training and testing data using the X_scaler and y_scaler models
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
y_train_scaled = y_scaler.transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
# Create the model using LinearRegression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train_scaled, y_train_scaled)
# #evaluate the model (intercept and slope)
# print(model.intercept_)
# print(model.coef_)
# Make predictions using a fitted model
predictions = model.predict(X_test_scaled)
model.fit(X_train_scaled, y_train_scaled)
plt.scatter(model.predict(X_train_scaled), y_train_scaled - model.predict(X_train_scaled), c="blue", label="Training Data")
plt.scatter(model.predict(X_test_scaled), y_test_scaled - model.predict(X_test_scaled), c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y_test_scaled.min(), xmax=y_test_scaled.max())
plt.title("Residual Plot")
plt.show()
#compare actual output values with predicted values
df1 = X_test
df1['Actual'] = y_test.reshape(1,-1)[0]
df1['Linear_Regression_Predicted'] = y_scaler.inverse_transform(model.predict(X_test_scaled))
df1.head(10)
# Fit the model to the training data and calculate the scores for the training and testing data
training_score = model.score(X_train_scaled, y_train_scaled)
testing_score = model.score(X_test_scaled, y_test_scaled)
print(f"Training Score: {training_score}")
print(f"Testing Score: {testing_score}")
# Used X_test_scaled, y_test_scaled, and model.predict(X_test_scaled) to calculate MSE and R2
from sklearn.metrics import mean_squared_error
MSE = mean_squared_error(y_test_scaled, predictions)
r2 = model.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
#test
# suburb needs to be categorical
# note: this hard-coded row matches an earlier 12-feature setup; the scaler above was fitted on 19 columns, so the row needs updating before this will run
X_test = X_scaler.transform([[4,3,2,175,186,2019,0,0,0,0,1,0]])
predictions = model.predict(X_test)
results = y_scaler.inverse_transform(predictions)
results
###Output
_____no_output_____
###Markdown
LASSO model
###Code
# LASSO model
# Note: Use an alpha of .01 when creating the model for this activity
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=.01).fit(X_train_scaled, y_train_scaled)
lasso_predictions = lasso.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, lasso_predictions) #error to a model (closer to 0 the better)
r2 = lasso.score(X_test_scaled, y_test_scaled) #nearer to 1 the better
print(f"MSE: {MSE}, R2: {r2}")
# find optimal alpha with grid search
alpha = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
param_grid = dict(alpha=alpha)
lasso_grid = GridSearchCV(estimator=lasso, param_grid=param_grid, scoring='r2', verbose=1, n_jobs=-1)
lasso_grid_result = lasso_grid.fit(X_train_scaled, y_train_scaled)
# lasso_grid_predictions = lasso_grid_model(X_test_scaled)
# MSE = mean_squared_error(y_test_scaled, lasso_grid_predictions) #error to a model (closer to 0 the better)
# r2 = lasso_grid_model.score(X_test_scaled, y_test_scaled) #nearer to 1 the better
# print(f"MSE: {MSE}, R2: {r2}")
print('Best Score: ', lasso_grid_result.best_score_)
print('Best Params: ', lasso_grid_result.best_params_)
best_lasso = Lasso(alpha=0.001).fit(X_train_scaled, y_train_scaled)
best_lasso_predictions = best_lasso.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, best_lasso_predictions)
r2 = best_lasso.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
df1['Lasso_Predicted'] = y_scaler.inverse_transform(best_lasso.predict(X_test_scaled))
df1.head(10)
###Output
<ipython-input-50-96dcefbde04c>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df1['Lasso_Predicted'] = y_scaler.inverse_transform(best_lasso.predict(X_test_scaled))
###Markdown
Ridge model
###Code
# Ridge model
# Note: Use an alpha of .01 when creating the model for this activity
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=.01).fit(X_train_scaled, y_train_scaled)
ridge_predictions = ridge.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, ridge_predictions)
r2 = ridge.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
# find optimal alpha with grid search
alpha = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
param_grid = dict(alpha=alpha)
ridge_grid = GridSearchCV(estimator=ridge, param_grid=param_grid, scoring='r2', verbose=1, n_jobs=-1)
ridge_grid_result = ridge_grid.fit(X_train_scaled, y_train_scaled)
# ridge_grid_predictions = ridge_grid(X_test_scaled)
# MSE = mean_squared_error(y_test_scaled, ridge_grid_predictions) #error to a model (closer to 0 the better)
# r2 = ridge_grid.score(X_test_scaled, y_test_scaled) #nearer to 1 the better
# print(f"MSE: {MSE}, R2: {r2}")
print('Best Score: ', ridge_grid_result.best_score_)
print('Best Params: ', ridge_grid_result.best_params_)
best_ridge = Ridge(alpha=10).fit(X_train_scaled, y_train_scaled)
best_ridge_predictions = best_ridge.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, best_ridge_predictions)
r2 = best_ridge.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
df1['Ridge_Predicted'] = y_scaler.inverse_transform(best_ridge.predict(X_test_scaled))
df1.head(10)
###Output
<ipython-input-56-4ab57f1676a3>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df1['Ridge_Predicted'] = y_scaler.inverse_transform(best_ridge.predict(X_test_scaled))
###Markdown
ElasticNet model
###Code
# ElasticNet model
# Note: Use an alpha of .01 when creating the model for this activity
from sklearn.linear_model import ElasticNet
elasticnet = ElasticNet(alpha=.01).fit(X_train_scaled, y_train_scaled)
elas_predictions = elasticnet.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, elas_predictions)
r2 = elasticnet.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
# find optimal alpha with grid search
alpha = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
param_grid = dict(alpha=alpha)
elasticnet_grid = GridSearchCV(estimator=elasticnet, param_grid=param_grid, scoring='r2', verbose=1, n_jobs=-1)
elasticnet_grid_result= elasticnet_grid.fit(X_train_scaled, y_train_scaled)
# elasticnet_grid_predictions = elasticnet_grid(X_test_scaled)
# MSE = mean_squared_error(y_test_scaled, elasticnet_grid_predictions) #error to a model (closer to 0 the better)
# r2 = elasticnet_grid.score(X_test_scaled, y_test_scaled) #nearer to 1 the better
# print(f"MSE: {MSE}, R2: {r2}")
print('Best Score: ', elasticnet_grid_result.best_score_)
print('Best Params: ', elasticnet_grid_result.best_params_)
best_elasticnet = ElasticNet(alpha=0.001).fit(X_train_scaled, y_train_scaled)
best_elasticnet_predictions = best_elasticnet.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, best_elasticnet_predictions)
r2 = best_elasticnet.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
df1['elasticnet_Predicted'] = y_scaler.inverse_transform(best_elasticnet.predict(X_test_scaled))
df1.head(10)
###Output
<ipython-input-60-30feeaf9fe4a>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df1['elasticnet_Predicted'] = y_scaler.inverse_transform(best_elasticnet.predict(X_test_scaled))
###Markdown
Save the best Model
###Code
import joblib
joblib.dump(best_lasso, "best_model.pkl")
my_model = joblib.load("best_model.pkl")
X_test = X_scaler.transform([[4,3,2,175,186,2019,0,0,0,0,1,0]])
predictions = my_model.predict(X_test)
results = y_scaler.inverse_transform(predictions)
results
y_pred = my_model.predict(X_test_scaled)
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test_scaled, y_pred))
print('MSE:', metrics.mean_squared_error(y_test_scaled, y_pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test_scaled, y_pred)))
print('VarScore:',metrics.explained_variance_score(y_test_scaled,y_pred))
# Visualizing Our predictions
fig = plt.figure(figsize=(10,5))
plt.scatter(y_test_scaled,y_pred)
# Perfect predictions
plt.plot(y_test_scaled,y_test_scaled,'r')
###Output
MAE: 0.26162743760137125
MSE: 0.11450046069356608
RMSE: 0.33837916705016885
VarScore: 0.8591439426886013
|
Discrete+Distributions.ipynb | ###Markdown
Example: Discrete Probability Distributions
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Bernoulli Distribution$P(X=1) = p,\ P(X=0) = 1-p$
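Its variance, plotted in the next cell, follows from $\mathrm{Var}(X) = E[X^2] - E[X]^2 = p - p^2 = p(1-p)$.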
###Code
p = np.linspace(0, 1, 100)
var = p*(1-p)
plt.plot(p, var)
plt.title('variance vs p')
plt.show()
###Output
_____no_output_____
###Markdown
Binomial Distribution$p(X=k) = \binom{n}{k}p^kq^{n-k}$
###Code
from utilities import binom_pmf
# PMF
n = 10
k_range = np.arange(0, n+1)
for p in np.arange(.1, 1., .1):
out = []
for k in k_range:
out.append(binom_pmf(k, n, p))
plt.plot(k_range, out, label='p={}'.format(p.round(1)))
plt.xlabel('k')
plt.ylabel('p')
plt.legend()
plt.show()
from utilities import binom_sampling
# Sampling
for p in np.arange(.1, 1., .2):
plt.hist(binom_sampling(10, p), bins=10, label='p={}'.format(p.round(1)))
plt.legend()
plt.show()
###Output
_____no_output_____
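###Markdown
The helpers imported from `utilities` are not shown in this notebook; as a rough sketch (an assumption, not the module's actual code), `binom_pmf` could be computed straight from the formula above.
###Code
from math import comb

def binom_pmf_sketch(k, n, p):
    # P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

binom_pmf_sketch(3, 10, 0.5)  # e.g. P(exactly 3 successes in 10 fair trials)
###Output
_____no_output_____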
###Markdown
Poisson Distribution\begin{equation}P(X=k) = \frac{\lambda^k}{k!}e^{-\lambda}\end{equation}
###Code
from utilities import poisson_pmf
X = np.linspace(0, 10, 11)
# PMF
for i, l in enumerate([1, 2, 3, 5, 7, 10]):
Y = poisson_pmf(l, X)
plt.plot(Y, label='lambda = {}'.format(l))
plt.legend()
plt.show()
from utilities import poisson_sampling
# sampling
n = 1000
for i, l in enumerate([.001, .002, .003, .005, .007, .01]):
out = poisson_sampling(n, l)
plt.hist(out, bins=20, label='lambda = {}'.format(round(l*n)))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Geometric Distribution
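$P(X=k) = (1-p)^{k-1}p$ for $k = 1, 2, \dots$ (first-success-on-trial-$k$ convention, matching the $k$ values used below).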
###Code
from utilities import geometric_pmf, geometric_sampling
# PMF
k = np.arange(1, 100)
for p in np.arange(.1, 1., .1):
out = geometric_pmf(k, p)
plt.plot(out[:5], label='p = {}'.format(round(p, 1)))
plt.legend()
plt.xlabel('k')
plt.show()
# sampling
for p in np.arange(.1, 1., .1):
plt.hist(geometric_sampling(p), label='p={}'.format(round(p, 1)))
plt.legend()
plt.xlabel('k')
plt.show()
###Output
_____no_output_____ |
notebooks/eda/municipios_ideb.ipynb | ###Markdown
This notebook joins each municipality dataset with the `ideb` column from the Ideb dataset. The goal of this join is to analyze how the municipality variables impact Ideb, which is our response variable.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import stats
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
###Output
_____no_output_____
###Markdown
Load Datasets
###Code
df_municipios_2015 = pd.read_csv('../../data/bcggammachallenge/municipios/municipios20150101.csv')
df_municipios_2016 = pd.read_csv('../../data/bcggammachallenge/municipios/municipios20160101.csv')
df_municipios_2017 = pd.read_csv('../../data/bcggammachallenge/municipios/municipios20170101.csv')
df = pd.concat([df_municipios_2015, df_municipios_2016, df_municipios_2017])
df_municipios_2017.shape
df.head()
df_ideb_ini = pd.read_csv('../../data/bcggammachallenge/ideb/ideb_municipios_anosiniciais2005_2017.csv',sep = ',',encoding='latin-1')
df_ideb_fin = pd.read_csv('../../data/bcggammachallenge/ideb/ideb_municipios_anosfinais2005_2017.csv',sep = ',',encoding='latin-1')
df_ideb_ini.shape
df_ideb_ini.columns
df_ideb_fin.columns
df_ideb_ini[['Cod_Municipio_Completo', 'Ideb2017']].head()
df_ideb_ini = df_ideb_ini.rename(columns={'Cod_Municipio_Completo': 'cod_municipio'})
df_ideb_fin = df_ideb_fin.rename(columns={'Cod_Municipio_Completo': 'cod_municipio'})
df_ideb_ini_2015 = df_ideb_ini.copy()
df_ideb_ini_2017 = df_ideb_ini.copy()
df_ideb_fin_2015 = df_ideb_fin.copy()
df_ideb_fin_2017 = df_ideb_fin.copy()
df_ideb_ini_2015 = df_ideb_ini_2015[['cod_municipio', 'Ideb2015']]
df_ideb_ini_2017 = df_ideb_ini_2017[['cod_municipio', 'Ideb2017']]
df_ideb_fin_2015 = df_ideb_fin_2015[['cod_municipio', 'Ideb2015']]
df_ideb_fin_2017 = df_ideb_fin_2017[['cod_municipio', 'Ideb2017']]
df_ideb_ini_2015.head()
df_ideb_ini_2017.head()
df_ideb_fin_2015.head()
df_ideb_fin_2017.head()
df_ideb_ini_2015['cod_municipio'] = df_ideb_ini_2015.cod_municipio.astype(float)
df_ideb_ini_2017['cod_municipio'] = df_ideb_ini_2017.cod_municipio.astype(float)
df_ideb_fin_2015['cod_municipio'] = df_ideb_fin_2015.cod_municipio.astype(float)
df_ideb_fin_2017['cod_municipio'] = df_ideb_fin_2017.cod_municipio.astype(float)
df_result_ini_2015 = pd.merge(df_municipios_2015, df_ideb_ini_2015, how='inner', on='cod_municipio')
df_result_ini_2017 = pd.merge(df_municipios_2017, df_ideb_ini_2017, how='inner', on='cod_municipio')
df_result_fin_2015 = pd.merge(df_municipios_2015, df_ideb_fin_2015, how='inner', on='cod_municipio')
df_result_fin_2017 = pd.merge(df_municipios_2017, df_ideb_fin_2017, how='inner', on='cod_municipio')
df_result_ini_2015 = df_result_ini_2015.rename(columns={'Ideb2015': 'ideb'})
df_result_ini_2017 = df_result_ini_2017.rename(columns={'Ideb2017': 'ideb'})
df_result_fin_2015 = df_result_fin_2015.rename(columns={'Ideb2015': 'ideb'})
df_result_fin_2017 = df_result_fin_2017.rename(columns={'Ideb2017': 'ideb'})
df_result_ini_2015.sort_values(by=['ideb'], ascending=False).head(8)
df_result_ini_2017.sort_values(by=['ideb'], ascending=False).head(8)
df_result_fin_2015.sort_values(by=['ideb'], ascending=False).head(8)
df_result_fin_2017.sort_values(by=['ideb'], ascending=False).head(8)
###Output
_____no_output_____
###Markdown
Ideb Cleaning
###Code
df_result_ini_2015.drop(df_result_ini_2015[df_result_ini_2015.ideb == '-'].index, inplace=True)
df_result_ini_2017.drop(df_result_ini_2017[df_result_ini_2017.ideb == '-'].index, inplace=True)
df_result_fin_2015.drop(df_result_fin_2015[df_result_fin_2015.ideb == '-'].index, inplace=True)
df_result_fin_2017.drop(df_result_fin_2017[df_result_fin_2017.ideb == '-'].index, inplace=True)
df_result_ini_2015['ideb'] = pd.to_numeric(df_result_ini_2015['ideb'])
df_result_ini_2017['ideb'] = pd.to_numeric(df_result_ini_2017['ideb'])
df_result_fin_2015['ideb'] = pd.to_numeric(df_result_fin_2015['ideb'])
df_result_fin_2017['ideb'] = pd.to_numeric(df_result_fin_2017['ideb'])
print(df_result_ini_2015.shape)
print(df_result_fin_2015.shape)
print(df_result_ini_2017.shape)
print(df_result_fin_2017.shape)
###Output
(12048, 52)
(12158, 52)
(12508, 52)
(12751, 52)
###Markdown
Linear correlation of all numerical variables with Ideb
###Code
def calculate_pearson(df):
correlations = {}
numerical_features = df.select_dtypes(exclude = ["object"]).columns
numerical_features = numerical_features.drop("cod_municipio")
for i in numerical_features:
corr = stats.pearsonr(df[i], df['ideb'])[0]
correlations[i] = corr
df_corr = pd.DataFrame(list(correlations.items()), columns=['feature', 'correlation_with_ideb'])
df_corr = df_corr.dropna()
return df_corr
df_corr_ini_2015 = calculate_pearson(df_result_ini_2015)
df_corr_ini_2017 = calculate_pearson(df_result_ini_2017)
df_corr_fin_2015 = calculate_pearson(df_result_fin_2015)
df_corr_fin_2017 = calculate_pearson(df_result_fin_2017)
df_corr_ini_2015.sort_values(by=['correlation_with_ideb'], ascending=False)
df_corr_ini_2017.sort_values(by=['correlation_with_ideb'], ascending=False)
df_corr_fin_2015.sort_values(by=['correlation_with_ideb'], ascending=False)
df_corr_fin_2017.sort_values(by=['correlation_with_ideb'], ascending=False)
###Output
_____no_output_____
###Markdown
Separate early years (anos iniciais) from final years (anos finais)
###Code
df_result_ini_2015.filter(like='medio').columns
df_result_ini_2015 = df_result_ini_2015.drop(df_result_ini_2015.filter(like='medio').columns, axis=1)
df_result_ini_2017 = df_result_ini_2017.drop(df_result_ini_2017.filter(like='medio').columns, axis=1)
df_result_fin_2015.filter(like='fund').columns
df_result_fin_2015 = df_result_fin_2015.drop(df_result_fin_2015.filter(like='fund').columns, axis=1)
df_result_fin_2017 = df_result_fin_2017.drop(df_result_fin_2017.filter(like='fund').columns, axis=1)
df_result_fin_2015 = df_result_fin_2015.drop(['num_estudantes_ensino_infantil'], axis=1)
df_result_fin_2017 = df_result_fin_2017.drop(['num_estudantes_ensino_infantil'], axis=1)
print(df_result_ini_2015.shape)
print(df_result_ini_2017.shape)
print(df_result_fin_2015.shape)
print(df_result_fin_2017.shape)
###Output
(12048, 46)
(12508, 46)
(12158, 40)
(12751, 40)
###Markdown
Save
###Code
df_result_ini_2015.to_csv('../../data/bases_ale/ideb_municipios_2015_ai.csv')
df_result_ini_2017.to_csv('../../data/bases_ale/ideb_municipios_2017_ai.csv')
df_result_fin_2015.to_csv('../../data/bases_ale/ideb_municipios_2015_af.csv')
df_result_fin_2017.to_csv('../../data/bases_ale/ideb_municipios_2017_af.csv')
###Output
_____no_output_____ |
Code/credit_risk_resampling_ggc.ipynb | ###Markdown
Credit Risk Resampling Techniques
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
from sklearn.preprocessing import LabelEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership",
"annual_inc", "verification_status", "issue_d", "loan_status",
"pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d",
"collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Load the data
file_path = Path('../Resources/LoanStats_2019Q1.csv.zip')
df = pd.read_csv(file_path, skiprows=1)[:-2]
df = df.loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
df.reset_index(inplace=True, drop=True)
df.head()
###Output
_____no_output_____
###Markdown
Split the Data into Training and Testing
###Code
#list(df)
# Create our features
x_cols = [i for i in df.columns if i != 'loan_status']  # note: ('loan_status') without a comma is a string, not a tuple
X = df[x_cols]
# Create our target
y = df['loan_status']
X.head()
# Creating an instance of label encoder
#label_encoder = LabelEncoder()
# Fitting the label encoder
##label_encoder.fit(X['home_ownership'])
#print(list(label_encoder.classes_))
#X['home_ownership'] = label_encoder.transform(X['home_ownership'])
#label_encoder.fit(X['verification_status'])
#print(list(label_encoder.classes_))
#X['verification_status'] = label_encoder.transform(X['verification_status'])
#label_encoder.fit(X['pymnt_plan'])
#print(list(label_encoder.classes_))
#X['pymnt_plan'] = label_encoder.transform(X['pymnt_plan'])
#label_encoder.fit(X['initial_list_status'])
#print(list(label_encoder.classes_))
#X['initial_list_status'] = label_encoder.transform(X['initial_list_status'])
#label_encoder.fit(X['next_pymnt_d'])
#print(list(label_encoder.classes_))
#X['next_pymnt_d'] = label_encoder.transform(X['next_pymnt_d'])
#label_encoder.fit(X['application_type'])
#print(list(label_encoder.classes_))
#X['application_type'] = label_encoder.transform(X['application_type'])
#label_encoder.fit(X['issue_d'])
#print(list(label_encoder.classes_))
#X['issue_d'] = label_encoder.transform(X['issue_d'])
#X.drop(['issue_d'], axis=1,inplace=True)
X = pd.get_dummies(X, columns=["home_ownership", "issue_d", "verification_status", "pymnt_plan", "initial_list_status", "application_type", "next_pymnt_d", "hardship_flag", "debt_settlement_flag"])
X.head()
y.head()
y.value_counts()
Counter(y)
# Create X_train, X_test, y_train, y_test
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
###Output
_____no_output_____
###Markdown
Data Pre-Processing
Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
###Code
# Create the StandardScaler instance
scaler = StandardScaler()
# Fit the Standard Scaler with the training data
# When fitting scaling functions, only train on the training dataset
X_scaler = scaler.fit(X_train)
# Scale the training and testing data
# Scaling data --> we transform both the training and the test data; X_scaler has been fit on the training data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Oversampling
In this section, you will compare two oversampling algorithms to determine which algorithm results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `imbalanced_classification_report` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
Naive Random Oversampling
###Code
# Initial state
Counter(y_train)
# implement random oversampling
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
Counter(y_resampled)
# observe now that the number of samples is equal in the two classes
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
from sklearn.metrics import balanced_accuracy_score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
###Output
pre rec spe f1 geo iba sup
high_risk 0.01 0.69 0.63 0.02 0.66 0.44 101
low_risk 1.00 0.63 0.69 0.77 0.66 0.43 17104
avg / total 0.99 0.63 0.69 0.77 0.66 0.43 17205
###Markdown
SMOTE Oversampling
###Code
# Initial state
Counter(y_train)
# Fit the SMOTE model to the data and check the count of each class
from imblearn.over_sampling import SMOTE
X_resampled, y_resampled = SMOTE(random_state=1, sampling_strategy=1.0).fit_resample(
X_train, y_train
)
from collections import Counter
Counter(y_resampled)
# Fit a logistic regression model using the SMOTE resampled data
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Calculate the balanced accuracy score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred))
###Output
pre rec spe f1 geo iba sup
high_risk 0.01 0.61 0.65 0.02 0.63 0.40 101
low_risk 1.00 0.65 0.61 0.78 0.63 0.40 17104
avg / total 0.99 0.65 0.61 0.78 0.63 0.40 17205
###Markdown
Undersampling
In this section, you will test an undersampling algorithm to determine whether it results in better performance than the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `imbalanced_classification_report` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
###Code
# Initial state
Counter(y_train)
# Resample the data using the ClusterCentroids resampler
# Fit the data using `ClusterCentroids` and check the count of each class
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=1)
X_resampled, y_resampled = cc.fit_resample(X_train, y_train)
from collections import Counter
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
# Logistic regression using cluster centroid undersampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Calculate the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
###Output
pre rec spe f1 geo iba sup
high_risk 0.01 0.65 0.40 0.01 0.51 0.27 101
low_risk 0.99 0.40 0.65 0.57 0.51 0.25 17104
avg / total 0.99 0.40 0.65 0.57 0.51 0.25 17205
###Markdown
Combination (Over and Under) Sampling
In this section, you will test a combination over- and under-sampling algorithm to determine if it results in better performance than the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `imbalanced_classification_report` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
###Code
# Initial state
Counter(y_train)
#SMOTEENN combination sampling
from imblearn.combine import SMOTEENN
sm = SMOTEENN(random_state=1)
X_resampled, y_resampled = sm.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
# Logistic regression using random combination sampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Calculate the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
###Output
pre rec spe f1 geo iba sup
high_risk 0.01 0.70 0.58 0.02 0.64 0.41 101
low_risk 1.00 0.58 0.70 0.73 0.64 0.40 17104
avg / total 0.99 0.58 0.70 0.73 0.64 0.40 17205
|
src/A5-Extension.ipynb | ###Markdown
A5: Extension Analysis
Motivation/problem statement
The syndemic of COVID-19 and deaths from drug overdose in the US continues to evolve. Even though COVID-19 cases and deaths have trended downward since the peak in December 2020, this might not hold for drug abuse and overdose. Macroenvironmental changes that began during the COVID-19 pandemic, such as increased physical isolation, mental health stressors and economic insecurity, persist and may be associated with the continuing drug overdose cases.
The main question that will be explored for this project is as follows: how did the COVID-19 pandemic affect drug overdose cases between February 1, 2020 and October 15, 2021, in Philadelphia, Pennsylvania?
One of the main reasons I chose to analyze drug use/overdose related data is that it has a spreading impact on the people around the person who overdoses and suffers. According to the CDC, the total "economic burden" of opioid misuse alone in the United States is $78.5 billion a year, including the costs of healthcare, lost productivity, addiction treatment, and criminal justice involvement. This impact is huge and affects everyone in the economy. The sheer economic and financial impact is what makes this problem and analysis interesting.
Research questions and/or hypotheses
Based on some preliminary research and analysis of the available dataset, I have formed the following hypotheses:
1. The number of drug overdose cases during the COVID-19 pandemic is significantly higher than before the start of the pandemic
2. Age does not affect the dosage of Naloxone administered
3. The drug type does not affect the dosage of Naloxone administered
For the above-mentioned hypotheses we make the following assumptions:
* We define the pandemic period to be between February 1, 2020 and October 15, 2021
* We only analyze data from Philadelphia, Pennsylvania
###Code
import os
import datetime
import pandas as pd
pd.set_option('display.max_columns', None)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.ticker import (FuncFormatter)
import scipy.stats as stats
from pprint import pprint as pp
from tqdm import tqdm
RAW_DATA_PATH = '../data/raw'
PROCESSED_DATA_PATH = '../data/processed'
ERROR_DATA_PATH = '../data/errors'
for path in [RAW_DATA_PATH, PROCESSED_DATA_PATH, ERROR_DATA_PATH]:
if not os.path.exists(path):
os.makedirs(path)
DRUG_OVERDOSE_FILE = os.path.join(PROCESSED_DATA_PATH, 'Pennsylvania-Drug-Overdose.csv')
COVID_DATA_FILE = os.path.join(PROCESSED_DATA_PATH, 'Philadelphia-Pennsylvania-Covid-Data.csv')
PHILLY_POPULATION_DATA_FILE = os.path.join(PROCESSED_DATA_PATH, 'Philadelphia-Ages.csv')
# We define the start of the pandemic to be: February 1, 2020
COVID_START_PERIOD = datetime.date(2020, 2, 1)
###Output
_____no_output_____
###Markdown
Parsing Overdose Cases File
This data has been obtained from [OpenDATAPA](https://data.pa.gov/Opioid-Related/Overdose-Information-Network-Data-CY-January-2018-/hbkk-dwy3).
This dataset contains summary information on overdose responses and naloxone administrations by Pennsylvania criminal justice agencies and some third-party (i.e. EMS, Fire, Medical Staff, etc.) first responders voluntarily entering incident data. Due to the voluntary nature of the application, the ODIN information provided may not represent the totality of all overdose and/or naloxone administration incidents involving criminal justice agencies occurring within the Commonwealth. Although this dataset does include some third-party administrations of naloxone, it should not be used to measure overdose response and naloxone administration incidents among all first responders.
###Code
overdose_df = pd.read_csv(
os.path.join(DRUG_OVERDOSE_FILE),
low_memory=False,
header=[0]
)
# Parse the date col as the correct type
overdose_df['Incident Date'] = pd.to_datetime(overdose_df['Incident Date']).dt.date
overdose_df['Incident Year'] = pd.to_datetime(overdose_df['Incident Date']).dt.year
overdose_df['Incident Month'] = pd.to_datetime(overdose_df['Incident Date']).dt.month
overdose_pen_df = overdose_df.loc[overdose_df['Incident County Name'] == 'Philadelphia'].copy()  # copy so the assignments below don't hit SettingWithCopyWarning
overdose_pen_df['Overdose Case'] = 1
overdose_pen_df['Total Dosage(mg)'] = overdose_pen_df['Dose Count'] * overdose_pen_df['Dose Unit']
overdose_pen_df.head()
overdose_pen_precovid_df = overdose_pen_df.loc[(overdose_pen_df['Incident Date'] < COVID_START_PERIOD)]
overdose_pen_precovid_monthly_df = overdose_pen_precovid_df.groupby(
['Incident Year', 'Incident Month']
).agg({'Overdose Case':'sum'}).reset_index()
overdose_pen_duringcovid_df = overdose_pen_df.loc[
(overdose_pen_df['Incident Date'] >= COVID_START_PERIOD)
]
overdose_pen_duringcovid_monthly_df = overdose_pen_duringcovid_df.groupby(
['Incident Year', 'Incident Month']
).agg({'Overdose Case':'sum'}).reset_index()
###Output
_____no_output_____
###Markdown
Parsing COVID Cases File
This file was produced in `A4-Common-Analysis.ipynb` - check that notebook for details.
###Code
cases_df = pd.read_csv(
COVID_DATA_FILE,
low_memory=False,
header=[0]
)
# Parse the date col as the correct type
cases_df['Date'] = pd.to_datetime(cases_df['Date']).dt.date
cases_df['Year'] = pd.to_datetime(cases_df['Date']).dt.year
cases_df['Month'] = pd.to_datetime(cases_df['Date']).dt.month
cases_df.head()
###Output
_____no_output_____
###Markdown
Research Question 1
We first attempt to answer the following question: **The number of drug overdose cases during the COVID-19 pandemic is significantly higher than before the start of the pandemic.**
This question will be answered using the following methodology:
1. Create a visualization of the time-series data
2. Run a t-test to check the hypothesis
###Code
case_counts = pd.merge(overdose_pen_df, cases_df, how='right', left_on='Incident Date', right_on='Date')
case_counts.sort_values(by='Incident Date', inplace=True)
case_counts
# Replace the categorical value with the numeric value
case_counts['Survive'].replace(
{
'Y': 1,
'U': 0,
'N': 0,
np.nan: 0
},
inplace=True
)
case_counts_date = case_counts.groupby(['Year', 'Month']).agg({'Cases': 'sum', 'Overdose Case': 'sum'}).reset_index()
case_counts_date['Day'] = 1
case_counts_date['Date'] = pd.to_datetime(case_counts_date[['Year', 'Month', 'Day']])
case_counts_date
plt.figure(figsize=(20,12))
# plt.style.use('seaborn-darkgrid')
plt.style.use('default')
ax = plt.gca()
ax2 = ax.twinx()
# Create lines and choose characteristics
ax.bar('Date', 'Cases', data=case_counts_date, width=8)
ax2.plot('Date', 'Overdose Case', 'g', data=case_counts_date, label='Overdose Cases')
# format the x-ticks
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
yearsFmt = mdates.DateFormatter('%Y-%m')
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(yearsFmt)
# # Add title and axis names
plt.title(f'Number of COVID and Overdose Cases', fontsize=18)
plt.xlabel('Date', fontsize=14)
ax.set_ylabel('Monthly COVID Cases', fontsize=14)
ax2.set_ylabel('Monthly Druge Overdose Cases', fontsize=14)
# ax.grid()
ax2.legend()
plt.figure(figsize=(20,10))
ax = plt.gca()
# ax2 = ax.twinx()
# Create lines and choose characteristics
# ax.bar('Date', 'Cases', data=case_counts_date, width=8)
ax.plot('Date', 'Overdose Case', '#0040C0', data=case_counts_date, label='Overdose Cases')
ax.axhline(y=overdose_pen_precovid_monthly_df['Overdose Case'].mean(), color='g', linestyle='dashdot', label='Pre-COVID Mean Monthly Overdose Cases')
ax.axhline(y=overdose_pen_duringcovid_monthly_df['Overdose Case'].mean(), color='r', linestyle='dashdot', label='During-COVID Mean Monthly Overdose Cases')
# format the x-ticks
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
yearsFmt = mdates.DateFormatter('%Y-%m')
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(yearsFmt)
# ax.xaxis.set_minor_locator(months)
# # Add title and axis names
plt.title(f'Drug Overdose Cases', fontsize=18)
plt.xlabel('Date', fontsize=14)
# ax.set_ylabel('Monthly COVID Cases', fontsize=14)
ax.set_ylabel('Monthly Druge Overdose Cases', fontsize=14)
#ax.grid()
ax.legend()
pre_coivd_overdose_cases = list(overdose_pen_precovid_monthly_df['Overdose Case'])
during_coivd_overdose_cases = list(overdose_pen_duringcovid_monthly_df['Overdose Case'])
print(pre_coivd_overdose_cases, during_coivd_overdose_cases)
tvalue, pvalue = stats.ttest_ind(pre_coivd_overdose_cases, during_coivd_overdose_cases)
print(f'The fvalue is: {tvalue:.4} and the p value is: {pvalue:.4}')
###Output
The fvalue is: 1.922 and the p value is: 0.06112
###Markdown
Research Question 2
We next attempt to answer the following question: **Age does not affect the dosage of Naloxone administered.**
This question will be answered using the following methodology:
1. We will pivot the table so that the age ranges become the columns and the rows hold the dosage of Naloxone administered for each reported case.
2. We then run an ANOVA test across the age ranges with the following hypotheses:
   * H_0: There is no difference in Naloxone dosage administered between the age ranges
   * H_1: There is a difference in Naloxone dosage administered between the age ranges
###Code
case_counts.head()
drug_overdose_pop_df = pd.merge(case_counts.groupby('Age Range').agg({'Overdose Case': 'sum',
'Survive': 'sum',
'Total Dosage(mg)': 'mean'
}).reset_index(),
pd.read_csv(PHILLY_POPULATION_DATA_FILE),
how='inner',
on='Age Range')
drug_overdose_pop_df['Survival Rate'] = drug_overdose_pop_df['Survive'] / drug_overdose_pop_df['Overdose Case']
drug_overdose_pop_df
overdose_dosage_pivot_df = case_counts.pivot(columns='Age Range', values='Total Dosage(mg)')
overdose_dosage_pivot_df.dropna(how='all', inplace=True)
columns = [
list(overdose_dosage_pivot_df['25 - 29'].dropna()),
list(overdose_dosage_pivot_df['30 - 39'].dropna()),
list(overdose_dosage_pivot_df['40 - 49'].dropna()),
list(overdose_dosage_pivot_df['50 - 59'].dropna()),
list(overdose_dosage_pivot_df['60 - 69'].dropna())
]
plt.figure(figsize=(15,7))
ax = plt.gca()
ax.boxplot(columns, vert=0)
plt.yticks([1, 2, 3, 4, 5], ['25 - 29', '30 - 39', '40 - 49', '50 - 59', '60 - 69'])
plt.title(f'Distribution of Naloxone Dosage (mg) Administered per Age Range', fontsize=18)
plt.xlabel('Naloxone Dosage Administered (mg)', fontsize=14)
ax.set_ylabel('Age Range', fontsize=14)
plt.show()
fvalue, pvalue = stats.f_oneway(
list(overdose_dosage_pivot_df['25 - 29'].dropna()),
list(overdose_dosage_pivot_df['30 - 39'].dropna()),
list(overdose_dosage_pivot_df['40 - 49'].dropna()),
list(overdose_dosage_pivot_df['50 - 59'].dropna()),
list(overdose_dosage_pivot_df['60 - 69'].dropna())
)
print(f'The fvalue is: {fvalue:.4} and the p value is: {pvalue:.4}')
###Output
The fvalue is: 0.5671 and the p value is: 0.6871
###Markdown
Research Question 3
Finally, we attempt to answer the following question: **The drug type does not affect the dosage of Naloxone administered.**
This question will be answered using the following methodology:
1. We will pivot the table so that the drug types become the columns and the rows hold the dosage of Naloxone administered for each reported case.
2. We then run an ANOVA test across the drug types with the following hypotheses:
   * H_0: There is no difference in Naloxone dosage administered between the drug types
   * H_1: There is a difference in Naloxone dosage administered between the drug types
###Code
drug_overdose_desc_df = case_counts.groupby('Susp OD Drug Desc').agg({'Overdose Case': 'sum'}).reset_index().sort_values(by='Overdose Case', ascending=False)
drug_overdose_desc_df.head(10)
drug_overdose_desc_df = drug_overdose_desc_df[drug_overdose_desc_df['Susp OD Drug Desc'] != 0]
drug_overdose_desc_df['Susp OD Drug Desc'].unique()
drug_overdose_desc_df['Susp OD Drug Desc'].replace(
{
'FENTANYL ANALOG/OTHER SYNTHETIC OPIOID': 'SYNTHETIC OPIOID',
'SYNTHETIC MARIJUANA': 'MARIJUANA',
'BARBITURATES (I.E. AMYTAL, NEMBUTAL, ETC)': 'BARBITURATES',
'BENZODIAZEPINES (I.E.VALIUM, XANAX, ATIVAN, ETC)': 'BENZODIAZEPINES'
}, inplace=True
)
plt.figure(figsize=(20,10))
ax = plt.gca()
# Create lines and choose characteristics
ax.bar('Susp OD Drug Desc', 'Overdose Case', data=drug_overdose_desc_df, width=0.5)
# Add title and axis names
plt.title(f'Number of Overdose Cases by Drug Type', fontsize=18)
plt.xlabel('Drug Type', fontsize=14)
plt.xticks(rotation = 30)
ax.set_ylabel('Cumulative Number of Cases', fontsize=14)
overdose_dosage_pivot_drug_df = case_counts.pivot(columns='Susp OD Drug Desc', values='Total Dosage(mg)')
overdose_dosage_pivot_drug_df.dropna(how='all', inplace=True)
overdose_dosage_pivot_drug_df.head()
columns = [
list(overdose_dosage_pivot_drug_df['HEROIN'].dropna()),
list(overdose_dosage_pivot_drug_df['FENTANYL'].dropna()),
list(overdose_dosage_pivot_drug_df['ALCOHOL'].dropna()),
list(overdose_dosage_pivot_drug_df['COCAINE/CRACK'].dropna()),
]
plt.figure(figsize=(15,7))
ax = plt.gca()
ax.boxplot(columns, vert=0)
plt.yticks([1, 2, 3, 4], ['HEROIN', 'FENTANYL', 'ALCOHOL', 'COCAINE/CRACK'])
plt.title(f'Distribution of Naloxone Dosage (mg) Administered per Drug Type', fontsize=18)
plt.xlabel('Naloxone Dosage Administered (mg)', fontsize=14)
ax.set_ylabel('Drug Type', fontsize=14)
plt.show()
fvalue, pvalue = stats.f_oneway(
list(overdose_dosage_pivot_drug_df['HEROIN'].dropna()),
list(overdose_dosage_pivot_drug_df['FENTANYL'].dropna()),
list(overdose_dosage_pivot_drug_df['ALCOHOL'].dropna()),
list(overdose_dosage_pivot_drug_df['COCAINE/CRACK'].dropna()),
)
print(f'The fvalue is: {fvalue:.4} and the p value is: {pvalue:.4}')
###Output
The fvalue is: 0.8348 and the p value is: 0.4795
|
_sequential/Deep Learning Sequential/Week 3/Triggerword Detection/Trigger+word+detection+-+v1.ipynb | ###Markdown
Trigger Word Detection
Welcome to the final programming assignment of this specialization! In this week's videos, you learned about applying deep learning to speech recognition. In this assignment, you will construct a speech dataset and implement an algorithm for trigger word detection (sometimes also called keyword detection, or wakeword detection). Trigger word detection is the technology that allows devices like Amazon Alexa, Google Home, Apple Siri, and Baidu DuerOS to wake up upon hearing a certain word. For this exercise, our trigger word will be "Activate." Every time it hears you say "activate," it will make a "chiming" sound. By the end of this assignment, you will be able to record a clip of yourself talking, and have the algorithm trigger a chime when it detects you saying "activate." After completing this assignment, perhaps you can also extend it to run on your laptop so that every time you say "activate" it starts up your favorite app, or turns on a network-connected lamp in your house, or triggers some other event.
In this assignment you will learn to:
- Structure a speech recognition project
- Synthesize and process audio recordings to create train/dev datasets
- Train a trigger word detection model and make predictions
Let's get started! Run the following cell to load the packages you are going to use.
###Code
!pip install pydub
import numpy as np
from pydub import AudioSegment
import random
import sys
import io
import os
import glob
import IPython
from td_utils import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Data synthesis: Creating a speech dataset
Let's start by building a dataset for your trigger word detection algorithm. A speech dataset should ideally be as close as possible to the application you will want to run it on. In this case, you'd like to detect the word "activate" in working environments (library, home, offices, open-spaces ...). You thus need to create recordings with a mix of positive words ("activate") and negative words (random words other than activate) on different background sounds. Let's see how you can create such a dataset.
1.1 - Listening to the data
One of your friends is helping you out on this project, and they've gone to libraries, cafes, restaurants, homes and offices all around the region to record background noises, as well as snippets of audio of people saying positive/negative words. This dataset includes people speaking in a variety of accents. In the raw_data directory, you can find a subset of the raw audio files of the positive words, negative words, and background noise. You will use these audio files to synthesize a dataset to train the model. The "activate" directory contains positive examples of people saying the word "activate". The "negatives" directory contains negative examples of people saying random words other than "activate". There is one word per audio recording. The "backgrounds" directory contains 10 second clips of background noise in different environments. Run the cells below to listen to some examples.
###Code
IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/4.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")
###Output
_____no_output_____
###Markdown
You will use these three types of recordings (positives/negatives/backgrounds) to create a labelled dataset.
1.2 - From audio recordings to spectrograms
What really is an audio recording? A microphone records little variations in air pressure over time, and it is these little variations in air pressure that your ear also perceives as sound. You can think of an audio recording as a long list of numbers measuring the little air pressure changes detected by the microphone. We will use audio sampled at 44100 Hz (or 44100 Hertz). This means the microphone gives us 44100 numbers per second. Thus, a 10 second audio clip is represented by 441000 numbers (= $10 \times 44100$). It is quite difficult to figure out from this "raw" representation of audio whether the word "activate" was said. In order to help your sequence model more easily learn to detect trigger words, we will compute a *spectrogram* of the audio. The spectrogram tells us how much different frequencies are present in an audio clip at a moment in time. (If you've ever taken an advanced class on signal processing or on Fourier transforms, a spectrogram is computed by sliding a window over the raw audio signal, and calculating the most active frequencies in each window using a Fourier transform. If you don't understand the previous sentence, don't worry about it.) Let's see an example; an illustrative sketch of how such a spectrogram could be computed appears after the next cell.
###Code
IPython.display.Audio("audio_examples/example_train.wav")
x = graph_spectrogram("audio_examples/example_train.wav")
###Output
_____no_output_____
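###Markdown
As a small aside, here is an illustrative sketch of how a spectrogram like the one above could be computed. This is only a sketch under assumptions: the helper `graph_spectrogram` in `td_utils` may use different window parameters, and the `nfft`, `fs`, and `noverlap` values below are illustrative choices rather than the assignment's exact settings.
###Code
# Illustrative sketch only, not the td_utils implementation
from matplotlib import mlab
from scipy.io import wavfile

def spectrogram_sketch(wav_file, nfft=200, fs=8000, noverlap=120):
    rate, data = wavfile.read(wav_file)   # raw audio samples
    if data.ndim == 2:                    # keep a single channel for stereo recordings
        data = data[:, 0]
    # Sliding-window Fourier transform: power for each (frequency, time window) pair
    pxx, freqs, t = mlab.specgram(data, NFFT=nfft, Fs=fs, noverlap=noverlap)
    return pxx                            # shape: (n_freq, number_of_time_steps)

# pxx = spectrogram_sketch("audio_examples/example_train.wav")
# print(pxx.shape)
###Output
_____no_output_____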
###Markdown
The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis). **Figure 1**: Spectrogram of an audio recording, where the color shows the degree to which different frequencies are present (loud) in the audio at different points in time. Green squares means a certain frequency is more active or more present in the audio clip (louder); blue squares denote less active frequencies. The dimension of the output spectrogram depends upon the hyperparameters of the spectrogram software and the length of the input. In this notebook, we will be working with 10 second audio clips as the "standard length" for our training examples. The number of timesteps of the spectrogram will be 5511. You'll see later that the spectrogram will be the input $x$ into the network, and so $T_x = 5511$.
###Code
_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:,0].shape)
print("Time steps in input after spectrogram", x.shape)
###Output
_____no_output_____
###Markdown
Now, you can define:
###Code
Tx = 5511 # The number of time steps input to the model from the spectrogram
n_freq = 101 # Number of frequencies input to the model at each time step of the spectrogram
###Output
_____no_output_____
###Markdown
Note that even with 10 seconds being our default training example length, 10 seconds of time can be discretized to different numbers of values. You've seen 441000 (raw audio) and 5511 (spectrogram). In the former case, each step represents $10/441000 \approx 0.000023$ seconds. In the second case, each step represents $10/5511 \approx 0.0018$ seconds.
For the 10sec of audio, the key values you will see in this assignment are:
- $441000$ (raw audio)
- $5511 = T_x$ (spectrogram output, and dimension of input to the neural network)
- $10000$ (used by the `pydub` module to synthesize audio)
- $1375 = T_y$ (the number of steps in the output of the GRU you'll build)
Note that each of these representations corresponds to exactly 10 seconds of time. It's just that they are discretizing it to different degrees. All of these are hyperparameters and can be changed (except the 441000, which is a function of the microphone). We have chosen values that are within the standard ranges used for speech systems. A quick numeric check of these step durations is shown after the next code cell.
Consider the $T_y = 1375$ number above. This means that for the output of the model, we discretize the 10s into 1375 time-intervals (each one of length $10/1375 \approx 0.0072$s) and try to predict for each of these intervals whether someone recently finished saying "activate." Consider also the 10000 number above. This corresponds to discretizing the 10sec clip into 10/10000 = 0.001 second intervals. 0.001 seconds is also called 1 millisecond, or 1ms. So when we say we are discretizing according to 1ms intervals, it means we are using 10,000 steps.
###Code
Ty = 1375 # The number of time steps in the output of our model
###Output
_____no_output_____
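###Markdown
A quick numeric check of the step durations quoted above (an illustrative sketch; the step counts are taken directly from the text):
###Code
# Step duration (in seconds) for each discretization of a 10-second clip
clip_seconds = 10.0
discretizations = {
    "raw audio": 441000,
    "spectrogram input (Tx)": 5511,
    "pydub synthesis": 10000,
    "model output (Ty)": 1375,
}
for name, steps in discretizations.items():
    print(f"{name}: {steps} steps -> {clip_seconds / steps:.6f} s per step")
###Output
_____no_output_____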
###Markdown
1.3 - Generating a single training example
Because speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds. It is quite slow to record lots of 10 second audio clips with random "activates" in them. Instead, it is easier to record lots of positive and negative words, and record background noise separately (or download background noise from free online sources).
To synthesize a single training example, you will:
- Pick a random 10 second background audio clip
- Randomly insert 0-4 audio clips of "activate" into this 10sec clip
- Randomly insert 0-2 audio clips of negative words into this 10sec clip
Because you have synthesized the word "activate" into the background clip, you know exactly when in the 10sec clip the "activate" makes its appearance. You'll see later that this makes it easier to generate the labels $y^{\langle t \rangle}$ as well. You will use the pydub package to manipulate audio. Pydub converts raw audio files into lists of Pydub data structures (it is not important to know the details here). Pydub uses 1ms as the discretization interval (1ms is 1 millisecond = 1/1000 seconds), which is why a 10sec clip is always represented using 10,000 steps.
###Code
# Load audio segments using pydub
activates, negatives, backgrounds = load_raw_audio()
print("background len: " + str(len(backgrounds[0]))) # Should be 10,000, since it is a 10 sec clip
print("activate[0] len: " + str(len(activates[0]))) # Maybe around 1000, since an "activate" audio clip is usually around 1 sec (but varies a lot)
print("activate[1] len: " + str(len(activates[1]))) # Different "activate" clips can have different lengths
###Output
_____no_output_____
###Markdown
**Overlaying positive/negative words on the background**: Given a 10sec background clip and a short audio clip (positive or negative word), you need to be able to "add" or "insert" the word's short audio clip onto the background. To ensure audio segments inserted onto the background do not overlap, you will keep track of the times of previously inserted audio clips. You will be inserting multiple clips of positive/negative words onto the background, and you don't want to insert an "activate" or a random word somewhere that overlaps with another clip you had previously added. For clarity, when you insert a 1sec "activate" onto a 10sec clip of cafe noise, you end up with a 10sec clip that sounds like someone saying "activate" in a cafe, with "activate" superimposed on the background cafe noise. You do *not* end up with an 11 sec clip. You'll see later how pydub allows you to do this.
**Creating the labels at the same time you overlay**: Recall also that the labels $y^{\langle t \rangle}$ represent whether or not someone has just finished saying "activate." Given a background clip, we can initialize $y^{\langle t \rangle}=0$ for all $t$, since the clip doesn't contain any "activates." When you insert or overlay an "activate" clip, you will also update the labels $y^{\langle t \rangle}$, so that 50 steps of the output now have target label 1. You will train a GRU to detect when someone has *finished* saying "activate". For example, suppose the synthesized "activate" clip ends at the 5sec mark in the 10sec audio---exactly halfway into the clip. Recall that $T_y = 1375$, so timestep $687 = $ `int(1375*0.5)` corresponds to the moment at 5sec into the audio. So, you will set $y^{\langle 688 \rangle} = 1$. Further, you would be quite satisfied if the GRU detects "activate" anywhere within a short time-interval after this moment, so we actually set 50 consecutive values of the label $y^{\langle t \rangle}$ to 1. Specifically, we have $y^{\langle 688 \rangle} = y^{\langle 689 \rangle} = \cdots = y^{\langle 737 \rangle} = 1$.
This is another reason for synthesizing the training data: it's relatively straightforward to generate these labels $y^{\langle t \rangle}$ as described above. In contrast, if you have 10sec of audio recorded on a microphone, it's quite time consuming for a person to listen to it and mark manually exactly when "activate" finished.
Here's a figure illustrating the labels $y^{\langle t \rangle}$ for a clip in which we have inserted "activate", "innocent", "activate", "baby." Note that the positive labels "1" are associated only with the positive words. **Figure 2**
To implement the training set synthesis process, you will use the following helper functions. All of these functions will use a 1ms discretization interval, so the 10sec of audio is always discretized into 10,000 steps.
1. `get_random_time_segment(segment_ms)` gets a random time segment in our background audio
2. `is_overlapping(segment_time, existing_segments)` checks if a time segment overlaps with existing segments
3. `insert_audio_clip(background, audio_clip, existing_times)` inserts an audio segment at a random time in our background audio using `get_random_time_segment` and `is_overlapping`
4. `insert_ones(y, segment_end_ms)` inserts 1's into our label vector y after the word "activate"
The function `get_random_time_segment(segment_ms)` returns a random time segment onto which we can insert an audio clip of duration `segment_ms`. Read through the code to make sure you understand what it is doing.
###Code
def get_random_time_segment(segment_ms):
"""
Gets a random time segment of duration segment_ms in a 10,000 ms audio clip.
Arguments:
segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds")
Returns:
segment_time -- a tuple of (segment_start, segment_end) in ms
"""
segment_start = np.random.randint(low=0, high=10000-segment_ms) # Make sure segment doesn't run past the 10sec background
segment_end = segment_start + segment_ms - 1
return (segment_start, segment_end)
###Output
_____no_output_____
###Markdown
Next, suppose you have inserted audio clips at segments (1000,1800) and (3400,4500). I.e., the first segment starts at step 1000, and ends at step 1800. Now, if we are considering inserting a new audio clip at (3000,3600), does this overlap with one of the previously inserted segments? In this case, (3000,3600) and (3400,4500) overlap, so we should decide against inserting a clip here. For the purpose of this function, define (100,200) and (200,250) to be overlapping, since they overlap at timestep 200. However, (100,199) and (200,250) are non-overlapping.
**Exercise**: Implement `is_overlapping(segment_time, existing_segments)` to check if a new time segment overlaps with any of the previous segments. You will need to carry out 2 steps:
1. Create a "False" flag, that you will later set to "True" if you find that there is an overlap.
2. Loop over the previous_segments' start and end times. Compare these times to the segment's start and end times. If there is an overlap, set the flag defined in (1) to True. You can use:
```python
for ....:
    if ... <= ...:
        ...
```
Hint: There is overlap if the segment starts before the previous segment ends, and the segment ends after the previous segment starts.
###Code
# GRADED FUNCTION: is_overlapping
def is_overlapping(segment_time, previous_segments):
"""
Checks if the time of a segment overlaps with the times of existing segments.
Arguments:
segment_time -- a tuple of (segment_start, segment_end) for the new segment
previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments
Returns:
True if the time segment overlaps with any of the existing segments, False otherwise
"""
segment_start, segment_end = segment_time
### START CODE HERE ### (≈ 4 line)
# Step 1: Initialize overlap as a "False" flag. (≈ 1 line)
overlap = False
# Step 2: loop over the previous_segments start and end times.
# Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines)
for previous_start, previous_end in previous_segments:
if segment_start <= previous_end and segment_end >= previous_start:
overlap = True
### END CODE HERE ###
return overlap
overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)])
overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)])
print("Overlap 1 = ", overlap1)
print("Overlap 2 = ", overlap2)
###Output
_____no_output_____
###Markdown
**Expected Output**: **Overlap 1** False **Overlap 2** True
Now, let's use the previous helper functions to insert a new audio clip onto the 10sec background at a random time, making sure that any newly inserted segment doesn't overlap with the previous segments.
**Exercise**: Implement `insert_audio_clip()` to overlay an audio clip onto the background 10sec clip. You will need to carry out 4 steps:
1. Get a random time segment of the right duration in ms.
2. Make sure that the time segment does not overlap with any of the previous time segments. If it is overlapping, then go back to step 1 and pick a new time segment.
3. Add the new time segment to the list of existing time segments, so as to keep track of all the segments you've inserted.
4. Overlay the audio clip over the background using pydub. We have implemented this for you.
###Code
# GRADED FUNCTION: insert_audio_clip
def insert_audio_clip(background, audio_clip, previous_segments):
"""
Insert a new audio segment over the background noise at a random time step, ensuring that the
audio segment does not overlap with existing segments.
Arguments:
background -- a 10 second background audio recording.
audio_clip -- the audio clip to be inserted/overlaid.
previous_segments -- times where audio segments have already been placed
Returns:
new_background -- the updated background audio
"""
# Get the duration of the audio clip in ms
segment_ms = len(audio_clip)
### START CODE HERE ###
# Step 1: Use one of the helper functions to pick a random time segment onto which to insert
# the new audio clip. (≈ 1 line)
segment_time = get_random_time_segment(segment_ms)
# Step 2: Check if the new segment_time overlaps with one of the previous_segments. If so, keep
# picking new segment_time at random until it doesn't overlap. (≈ 2 lines)
while is_overlapping(segment_time, previous_segments):
segment_time = get_random_time_segment(segment_ms)
# Step 3: Add the new segment_time to the list of previous_segments (≈ 1 line)
previous_segments.append(segment_time)
### END CODE HERE ###
# Step 4: Superpose audio segment and background
new_background = background.overlay(audio_clip, position=segment_time[0])
return new_background, segment_time
np.random.seed(5)
audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)])
audio_clip.export("insert_test.wav", format="wav")
print("Segment Time: ", segment_time)
IPython.display.Audio("insert_test.wav")
###Output
_____no_output_____
###Markdown
**Expected Output** **Segment Time** (2254, 3169)
###Code
# Expected audio
IPython.display.Audio("audio_examples/insert_reference.wav")
###Output
_____no_output_____
###Markdown
Finally, implement code to update the labels $y^{\langle t \rangle}$, assuming you just inserted an "activate." In the code below, `y` is a `(1,1375)` dimensional vector, since $T_y = 1375$. If the "activate" ended at time step $t$, then set $y^{\langle t+1 \rangle} = 1$ as well as for up to 49 additional consecutive values. However, make sure you don't run off the end of the array and try to update `y[0][1375]`, since the valid indices are `y[0][0]` through `y[0][1374]` because $T_y = 1375$. So if "activate" ends at step 1370, you would get only `y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1`**Exercise**: Implement `insert_ones()`. You can use a for loop. (If you are an expert in python's slice operations, feel free also to use slicing to vectorize this.) If a segment ends at `segment_end_ms` (using a 10000 step discretization), to convert it to the indexing for the outputs $y$ (using a $1375$ step discretization), we will use this formula: ``` segment_end_y = int(segment_end_ms * Ty / 10000.0)```
###Code
# GRADED FUNCTION: insert_ones
def insert_ones(y, segment_end_ms):
"""
Update the label vector y. The labels of the 50 output steps strictly after the end of the segment
should be set to 1. By strictly we mean that the label of segment_end_y should be 0 while the
50 following labels should be ones.
Arguments:
y -- numpy array of shape (1, Ty), the labels of the training example
segment_end_ms -- the end time of the segment in ms
Returns:
y -- updated labels
"""
# duration of the background (in terms of spectrogram time-steps)
segment_end_y = int(segment_end_ms * Ty / 10000.0)
# Add 1 to the correct index in the background label (y)
### START CODE HERE ### (≈ 3 lines)
y[:, segment_end_y+1:segment_end_y+51] = 1
# for i in range(segment_end_y, segment_end_y+50):
# if i < Ty:
# y[0, i] = 1
### END CODE HERE ###
return y
arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0,:])
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])
###Output
_____no_output_____
###Markdown
**Expected Output** **sanity checks**: 0.0 1.0 0.0
Finally, you can use `insert_audio_clip` and `insert_ones` to create a new training example.
**Exercise**: Implement `create_training_example()`. You will need to carry out the following steps:
1. Initialize the label vector $y$ as a numpy array of zeros and shape $(1, T_y)$.
2. Initialize the set of existing segments to an empty list.
3. Randomly select 0 to 4 "activate" audio clips, and insert them onto the 10sec clip. Also insert labels at the correct position in the label vector $y$.
4. Randomly select 0 to 2 negative audio clips, and insert them into the 10sec clip.
###Code
# GRADED FUNCTION: create_training_example
def create_training_example(background, activates, negatives):
"""
Creates a training example with a given background, activates, and negatives.
Arguments:
background -- a 10 second background audio recording
activates -- a list of audio segments of the word "activate"
negatives -- a list of audio segments of random words that are not "activate"
Returns:
x -- the spectrogram of the training example
y -- the label at each time step of the spectrogram
"""
# Set the random seed
np.random.seed(18)
# Make background quieter
background = background - 20
### START CODE HERE ###
# Step 1: Initialize y (label vector) of zeros (≈ 1 line)
y = np.zeros((1, Ty))
# Step 2: Initialize segment times as empty list (≈ 1 line)
previous_segments = []
### END CODE HERE ###
# Select 0-4 random "activate" audio clips from the entire list of "activates" recordings
number_of_activates = np.random.randint(0, 5)
random_indices = np.random.randint(len(activates), size=number_of_activates)
random_activates = [activates[i] for i in random_indices]
### START CODE HERE ### (≈ 3 lines)
# Step 3: Loop over randomly selected "activate" clips and insert in background
for random_activate in random_activates:
# Insert the audio clip on the background
background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
# Retrieve segment_start and segment_end from segment_time
# print(segment_time[0], segment_time[1])
segment_start, segment_end = segment_time
# Insert labels in "y"
y = insert_ones(y, segment_end)
### END CODE HERE ###
# Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings
number_of_negatives = np.random.randint(0, 3)
random_indices = np.random.randint(len(negatives), size=number_of_negatives)
random_negatives = [negatives[i] for i in random_indices]
### START CODE HERE ### (≈ 2 lines)
# Step 4: Loop over randomly selected negative clips and insert in background
for random_negative in random_negatives:
# Insert the audio clip on the background
background, _ = insert_audio_clip(background, random_negative, previous_segments)
### END CODE HERE ###
# Standardize the volume of the audio clip
background = match_target_amplitude(background, -20.0)
# Export new training example
file_handle = background.export("train" + ".wav", format="wav")
print("File (train.wav) was saved in your directory.")
# Get and plot spectrogram of the new recording (background with superposition of positive and negatives)
x = graph_spectrogram("train.wav")
return x, y
x, y = create_training_example(backgrounds[0], activates, negatives)
###Output
_____no_output_____
###Markdown
**Expected Output** Now you can listen to the training example you created and compare it to the spectrogram generated above.
###Code
IPython.display.Audio("train.wav")
###Output
_____no_output_____
###Markdown
**Expected Output**
###Code
IPython.display.Audio("audio_examples/train_reference.wav")
###Output
_____no_output_____
###Markdown
Finally, you can plot the associated labels for the generated training example.
###Code
plt.plot(y[0])
###Output
_____no_output_____
###Markdown
**Expected Output**
1.4 - Full training set
You've now implemented the code needed to generate a single training example. We used this process to generate a large training set. To save time, we've already generated a set of training examples.
###Code
# Load preprocessed training examples
X = np.load("./XY_train/X.npy")
Y = np.load("./XY_train/Y.npy")
###Output
_____no_output_____
###Markdown
1.5 - Development set
To test our model, we recorded a development set of 25 examples. While our training data is synthesized, we want to create a development set using the same distribution as the real inputs. Thus, we recorded 25 10-second audio clips of people saying "activate" and other random words, and labeled them by hand. This follows the principle described in Course 3 that we should create the dev set to be as similar as possible to the test set distribution; that's why our dev set uses real rather than synthesized audio.
###Code
# Load preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")
###Output
_____no_output_____
###Markdown
2 - Model
Now that you've built a dataset, let's write and train a trigger word detection model! The model will use 1-D convolutional layers, GRU layers, and dense layers. Let's load the packages that will allow you to use these layers in Keras. This might take a minute to load.
###Code
from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
2.1 - Build the model
Here is the architecture we will use. Take some time to look over the model and see if it makes sense. **Figure 3**
One key step of this model is the 1D convolutional step (near the bottom of Figure 3). It inputs the 5511 step spectrogram, and outputs a 1375 step output, which is then further processed by multiple layers to get the final $T_y = 1375$ step output. This layer plays a role similar to the 2D convolutions you saw in Course 4, of extracting low-level features and then possibly generating an output of a smaller dimension. Computationally, the 1-D conv layer also helps speed up the model because now the GRU has to process only 1375 timesteps rather than 5511 timesteps. The two GRU layers read the sequence of inputs from left to right, and the model ultimately uses a dense+sigmoid layer to make a prediction for $y^{\langle t \rangle}$. Because $y$ is binary valued (0 or 1), we use a sigmoid output at the last layer to estimate the chance of the output being 1, corresponding to the user having just said "activate."
Note that we use a uni-directional RNN rather than a bi-directional RNN. This is really important for trigger word detection, since we want to be able to detect the trigger word almost immediately after it is said. If we used a bi-directional RNN, we would have to wait for the whole 10sec of audio to be recorded before we could tell if "activate" was said in the first second of the audio clip.
Implementing the model can be done in four steps:
**Step 1**: CONV layer. Use `Conv1D()` to implement this, with 196 filters, a filter size of 15 (`kernel_size=15`), and stride of 4. [[See documentation.](https://keras.io/layers/convolutional/conv1d)]
**Step 2**: First GRU layer. To generate the GRU layer, use `X = GRU(units = 128, return_sequences = True)(X)`. Setting `return_sequences=True` ensures that all the GRU's hidden states are fed to the next layer. Remember to follow this with Dropout and BatchNorm layers.
**Step 3**: Second GRU layer. This is similar to the previous GRU layer (remember to use `return_sequences=True`), but has an extra dropout layer.
**Step 4**: Create a time-distributed dense layer as follows: `X = TimeDistributed(Dense(1, activation = "sigmoid"))(X)`. This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step. [[See documentation](https://keras.io/layers/wrappers/).]
**Exercise**: Implement `model()`; the architecture is presented in Figure 3.
###Code
# GRADED FUNCTION: model
def model(input_shape):
"""
Function creating the model's graph in Keras.
Argument:
input_shape -- shape of the model's input data (using Keras conventions)
Returns:
model -- Keras model instance
"""
X_input = Input(shape = input_shape)
### START CODE HERE ###
# Step 1: CONV layer (≈4 lines)
X = Conv1D(196, kernel_size=15, strides=4)(X_input) # CONV1D
X = BatchNormalization()(X) # Batch normalization
X = Activation('relu')(X) # ReLu activation
X = Dropout(0.8)(X) # dropout (use 0.8)
# Step 2: First GRU Layer (≈4 lines)
X = GRU(units = 128, return_sequences = True)(X) # GRU (use 128 units and return the sequences)
X = Dropout(0.8)(X) # dropout (use 0.8)
X = BatchNormalization()(X) # Batch normalization
# Step 3: Second GRU Layer (≈4 lines)
X = GRU(units = 128, return_sequences = True)(X) # GRU (use 128 units and return the sequences)
X = Dropout(0.8)(X) # dropout (use 0.8)
X = BatchNormalization()(X) # Batch normalization
X = Dropout(0.8)(X) # dropout (use 0.8)
# Step 4: Time-distributed dense layer (≈1 line)
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid)
### END CODE HERE ###
model = Model(inputs = X_input, outputs = X)
return model
model = model(input_shape = (Tx, n_freq))
###Output
_____no_output_____
###Markdown
Let's print the model summary to keep track of the shapes.
###Code
model.summary()
###Output
_____no_output_____
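###Markdown
 As a quick check on the shapes reported below, the Conv1D output length follows from the usual formula floor((n_in - kernel_size) / stride) + 1. A minimal sketch, using no assumptions beyond the hyperparameters already chosen above:
###Code
# Conv1D with kernel_size=15, stride=4 and no padding maps 5511 spectrogram steps to 1375 output steps
n_in, kernel_size, stride = 5511, 15, 4
n_out = (n_in - kernel_size) // stride + 1
print(n_out)  # 1375
###Output
_____no_output_____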
###Markdown
 **Expected Output**: **Total params** 522,561, **Trainable params** 521,657, **Non-trainable params** 904. The output of the network is of shape (None, 1375, 1) while the input is (None, 5511, 101). The Conv1D has reduced the number of steps from 5511 in the spectrogram to 1375. 2.2 - Fit the model. Trigger word detection takes a long time to train. To save time, we've already trained a model for about 3 hours on a GPU using the architecture you built above, and a large training set of about 4000 examples. Let's load the model.
###Code
model = load_model('./models/tr_model.h5')
###Output
_____no_output_____
###Markdown
You can train the model further, using the Adam optimizer and binary cross entropy loss, as follows. This will run quickly because we are training just for one epoch and with a small training set of 26 examples.
###Code
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size = 5, epochs=1)
###Output
_____no_output_____
###Markdown
 2.3 - Test the model. Finally, let's see how your model performs on the dev set.
###Code
loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev set accuracy = ", acc)
###Output
_____no_output_____
###Markdown
 This looks pretty good! However, accuracy isn't a great metric for this task, since the labels are heavily skewed to 0's, so a neural network that just outputs 0's would get slightly over 90% accuracy. We could define more useful metrics such as F1 score or Precision/Recall. But let's not bother with that here, and instead just empirically see how the model does. 3 - Making Predictions. Now that you have built a working model for trigger word detection, let's use it to make predictions. This code snippet runs audio (saved in a wav file) through the network. <!--can use your model to make predictions on new audio clips.You will first need to compute the predictions for an input audio clip.**Exercise**: Implement predict_activates(). You will need to do the following:1. Compute the spectrogram for the audio file2. Use `np.swap` and `np.expand_dims` to reshape your input to size (1, Tx, n_freqs)5. Use forward propagation on your model to compute the prediction at each output step!-->
###Code
def detect_triggerword(filename):
plt.subplot(2, 1, 1)
x = graph_spectrogram(filename)
    # the spectrogram outputs (freqs, Tx) and we want (Tx, freqs) to input into the model
x = x.swapaxes(0,1)
x = np.expand_dims(x, axis=0)
predictions = model.predict(x)
plt.subplot(2, 1, 2)
plt.plot(predictions[0,:,0])
plt.ylabel('probability')
plt.show()
return predictions
###Output
_____no_output_____
###Markdown
Once you've estimated the probability of having detected the word "activate" at each output step, you can trigger a "chiming" sound to play when the probability is above a certain threshold. Further, $y^{\langle t \rangle}$ might be near 1 for many values in a row after "activate" is said, yet we want to chime only once. So we will insert a chime sound at most once every 75 output steps. This will help prevent us from inserting two chimes for a single instance of "activate". (This plays a role similar to non-max suppression from computer vision.) <!-- **Exercise**: Implement chime_on_activate(). You will need to do the following:1. Loop over the predicted probabilities at each output step2. When the prediction is larger than the threshold and more than 75 consecutive time steps have passed, insert a "chime" sound onto the original audio clipUse this code to convert from the 1,375 step discretization to the 10,000 step discretization and insert a "chime" using pydub:` audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio.duration_seconds)*1000)`!-->
###Code
chime_file = "audio_examples/chime.wav"
def chime_on_activate(filename, predictions, threshold):
audio_clip = AudioSegment.from_wav(filename)
chime = AudioSegment.from_wav(chime_file)
Ty = predictions.shape[1]
# Step 1: Initialize the number of consecutive output steps to 0
consecutive_timesteps = 0
# Step 2: Loop over the output steps in the y
for i in range(Ty):
# Step 3: Increment consecutive output steps
consecutive_timesteps += 1
# Step 4: If prediction is higher than the threshold and more than 75 consecutive output steps have passed
if predictions[0,i,0] > threshold and consecutive_timesteps > 75:
# Step 5: Superpose audio and background using pydub
audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio_clip.duration_seconds)*1000)
# Step 6: Reset consecutive output steps to 0
consecutive_timesteps = 0
audio_clip.export("chime_output.wav", format='wav')
###Output
_____no_output_____
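###Markdown
 The `position` argument above converts an output step index into a millisecond offset inside the clip. A small worked sketch of that conversion (the step index 688 below is a hypothetical example, roughly the middle of a 10-second clip):
###Code
# output step i out of Ty = 1375 maps to (i / Ty) * duration_seconds * 1000 milliseconds
Ty_example = 1375
duration_seconds = 10.0
i = 688  # hypothetical output step
position_ms = (i / Ty_example) * duration_seconds * 1000
print(position_ms)  # ~5003.6 ms, i.e. about 5 seconds into the clip
###Output
_____no_output_____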
###Markdown
 3.3 - Test on dev examples Let's explore how our model performs on two unseen audio clips from the development set. Let's first listen to the two dev set clips.
###Code
IPython.display.Audio("./raw_data/dev/1.wav")
IPython.display.Audio("./raw_data/dev/2.wav")
###Output
_____no_output_____
###Markdown
 Now let's run the model on these audio clips and see if it adds a chime after "activate"!
###Code
filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
###Output
_____no_output_____
###Markdown
 Congratulations! You've come to the end of this assignment! Here's what you should remember:- Data synthesis is an effective way to create a large training set for speech problems, specifically trigger word detection. - Using a spectrogram and optionally a 1D conv layer is a common pre-processing step prior to passing audio data to an RNN, GRU or LSTM.- An end-to-end deep learning approach can be used to build a very effective trigger word detection system. *Congratulations* on finishing the final assignment! Thank you for sticking with us through the end and for all the hard work you've put into learning deep learning. We hope you have enjoyed the course! 4 - Try your own example! (OPTIONAL/UNGRADED) In this optional and ungraded portion of this notebook, you can try your model on your own audio clips! Record a 10 second audio clip of you saying the word "activate" and other random words, and upload it to the Coursera hub as `myaudio.wav`. Be sure to upload the audio as a wav file. If your audio is recorded in a different format (such as mp3) there is free software that you can find online for converting it to wav. If your audio recording is not 10 seconds, the code below will either trim or pad it as needed to make it 10 seconds.
###Code
# Preprocess the audio to the correct format
def preprocess_audio(filename):
# Trim or pad audio segment to 10000ms
padding = AudioSegment.silent(duration=10000)
segment = AudioSegment.from_wav(filename)[:10000]
segment = padding.overlay(segment)
# Set frame rate to 44100
segment = segment.set_frame_rate(44100)
# Export as wav
segment.export(filename, format='wav')
###Output
_____no_output_____
###Markdown
Once you've uploaded your audio file to Coursera, put the path to your file in the variable below.
###Code
your_filename = "audio_examples/my_audio.wav"
preprocess_audio(your_filename)
IPython.display.Audio(your_filename) # listen to the audio you uploaded
###Output
_____no_output_____
###Markdown
Finally, use the model to predict when you say activate in the 10 second audio clip, and trigger a chime. If beeps are not being added appropriately, try to adjust the chime_threshold.
###Code
chime_threshold = 0.5
prediction = detect_triggerword(your_filename)
chime_on_activate(your_filename, prediction, chime_threshold)
IPython.display.Audio("./chime_output.wav")
###Output
_____no_output_____ |
princeCVMLI2012/6.6.1 skin detection.ipynb | ###Markdown
 Prepare the data. Use dataset Skin_NonSkin.txt from [here](http://archive.ics.uci.edu/ml/datasets/Skin+Segmentation) \[1\].> The skin dataset is collected by randomly sampling B,G,R values from face images of various age groups (young, middle, and old), race groups (white, black, and asian), and genders obtained from FERET database and PAL database. Total learning sample size is 245057; out of which 50859 is the skin samples and 194198 is non-skin samples. Color FERET Image Database, PAL Face Database from Productive Aging Laboratory, The University of Texas at Dallas.
###Code
data = np.loadtxt('Skin_NonSkin.txt')
###Output
_____no_output_____
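###Markdown
 A quick sanity check of the class sizes quoted above (a sketch; as in the cells below, the 4th column holds the label, 1 for skin and 2 for non-skin, and numpy is assumed to be available as np):
###Code
# count how many rows carry each label
labels, counts = np.unique(data[:, 3], return_counts=True)
print(dict(zip(labels, counts)))  # expected roughly {1.0: 50859, 2.0: 194198}
###Output
_____no_output_____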
###Markdown
data1 is the skin samples, data2 is the non-skin samples
###Code
data1 = data[data[:,3]==1,:]
data2 = data[data[:,3]==2,:]
###Output
_____no_output_____
###Markdown
 Then split the whole dataset into a training set and a test set. I randomly take $n$ samples from data1 and data2 respectively to form the test set. d1train and d2train are the training sets, d1test and d2test are the test sets. np.random.seed is used to ensure a reproducible result.
###Code
np.random.seed(1)
np.random.shuffle(data1)
np.random.shuffle(data2)
ntest = 1000
d1train = data1[:-ntest,:]
d2train = data2[:-ntest,:]
d1test = data1[-ntest:,:]
d2test = data2[-ntest:,:]
###Output
_____no_output_____
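###Markdown
 A quick look at the resulting split sizes (sketch):
###Code
# each test set holds ntest rows; the remaining rows stay in the training sets
print(d1train.shape, d1test.shape)
print(d2train.shape, d2test.shape)
###Output
_____no_output_____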
###Markdown
 Create *generative* model. Two things you can play with here \[2\]: the color space and the model itself. Here I am using the RGB color space and a three-dimensional Gaussian model. u1 and s1 are the mean and covariance matrix of the skin model, u2 and s2 are the mean and covariance of the non-skin model.
###Code
u1 = np.average(d1train[:,0:3],axis=0)
u2 = np.average(d2train[:,0:3],axis=0)
s1 = np.cov(d1train[:,0:3].T)
s2 = np.cov(d2train[:,0:3].T)
###Output
_____no_output_____
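###Markdown
 If you want to experiment with a different colour space as suggested above, one cheap option that needs no extra libraries is normalized rg chromaticity, which discards overall brightness. A minimal sketch (hypothetical helper, not used in the rest of the notebook; note the data columns are ordered B, G, R):
###Code
def to_rg_chromaticity(bgr):
    # bgr: array of shape (n, 3) with columns B, G, R
    b, g, r = bgr[:, 0], bgr[:, 1], bgr[:, 2]
    s = b + g + r + 1e-9  # avoid division by zero for black pixels
    return np.stack([r / s, g / s], axis=1)

# example: chromaticity features for the skin training pixels
rg_skin = to_rg_chromaticity(d1train[:, 0:3])
print(rg_skin.shape)
###Output
_____no_output_____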
###Markdown
Calculate the prior probability over the states as $$Pr\left(w\right)=Bern_w\left[\lambda\right].$$ pw1 is $Pr\left(w=\text{skin}\right)=\lambda$. pw2 is $Pr\left(w=\text{non-skin}\right)=1-\lambda$
###Code
# pw1: prior weight for the skin class, computed here as the ratio of skin to non-skin training samples
# (note this differs from the skin fraction n_skin / (n_skin + n_nonskin) described in the markdown above)
pw1 = d1train.shape[0]*1./d2train.shape[0]
pw2 = 1-pw1
###Output
_____no_output_____
###Markdown
 Calculate the inference. Compute the class-conditional likelihood of each test pixel under the skin model and under the non-skin model.
###Code
# likelihood of the skin test pixels under the skin model (u1, s1) and under the non-skin model (u2, s2)
ll11 = multivariate_normal.pdf(d1test[:,0:3],u1,s1)
ll12 = multivariate_normal.pdf(d1test[:,0:3],u2,s2)
# likelihood of the non-skin test pixels under the skin model and under the non-skin model
ll21 = multivariate_normal.pdf(d2test[:,0:3],u1,s1)
ll22 = multivariate_normal.pdf(d2test[:,0:3],u2,s2)
###Output
_____no_output_____
###Markdown
 Calculate posterior probability according to Bayes' rule, shown as equation 6.14:$$Pr\left(w=1|x\right)=\frac{Pr\left(x|w=1\right)Pr\left(w=1\right)}{\sum_{k=0}^1Pr\left(x|w=k\right)Pr\left(w=k\right)}$$
###Code
d1bayes = ll11*pw1/(ll11*pw1+ll12*pw2)
d2bayes = ll22*pw2/(ll21*pw1+ll22*pw2)
###Output
_____no_output_____
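###Markdown
 For illustration, the same generative model can score any single pixel, not just the held-out test rows. A minimal sketch (the example pixel is hypothetical; columns are ordered B, G, R as in the dataset, and the quantities u1, s1, u2, s2, pw1, pw2 come from the cells above):
###Code
from scipy.stats import multivariate_normal

def skin_posterior(pixel_bgr):
    # class-conditional likelihoods under the skin and non-skin Gaussians
    l_skin = multivariate_normal.pdf(pixel_bgr, u1, s1)
    l_nonskin = multivariate_normal.pdf(pixel_bgr, u2, s2)
    # Bayes' rule with the prior weights pw1 (skin) and pw2 (non-skin)
    return l_skin * pw1 / (l_skin * pw1 + l_nonskin * pw2)

print(skin_posterior([80, 120, 180]))  # posterior probability of skin for one example pixel
###Output
_____no_output_____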
###Markdown
Use a threshold to classify a pixel as skin or non-skin
###Code
threshold = 0.5
# skin test pixels whose posterior probability of skin falls below the threshold are missed skin pixels (false negatives)
d1err = d1test[d1bayes<threshold,:]
# non-skin test pixels whose posterior probability of non-skin falls below the threshold are wrongly called skin (false positives)
d2err = d2test[d2bayes<threshold,:]
print('False negative {}, {}%'.format(d1err.shape[0],d1err.shape[0]*100./ntest))
print('False positive {}, {}%'.format(d2err.shape[0],d2err.shape[0]*100./ntest))
###Output
False negative 56, 5.6%
False positive 2, 0.2%
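###Markdown
 The 0.5 threshold above corresponds to the maximum-a-posteriori decision; raising it misses more skin pixels while lowering it lets more non-skin pixels through. A small sketch that sweeps a few thresholds on the same test posteriors, calling a pixel skin whenever its posterior probability of skin reaches the threshold:
###Code
# decision rule: call a pixel skin when P(skin|x) >= th
for th in [0.1, 0.3, 0.5, 0.7, 0.9]:
    missed_skin = (d1bayes < th).sum()          # skin test pixels called non-skin (false negatives)
    false_alarms = ((1 - d2bayes) >= th).sum()  # non-skin test pixels called skin (false positives)
    print('threshold {:.1f}: false negatives {}, false positives {}'.format(th, missed_skin, false_alarms))
###Output
_____no_output_____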
|
pipeline/eQTL_analysis_archive.ipynb | ###Markdown
 Bulk RNA-seq eQTL analysis - external bed input. This notebook provides a master control over the XQTL workflow so it can automate the various analyses on multiple data collections as proposed. Input: a recipe file; each row is a data collection with the following columns. Theme: name of the dataset, must be different for each; each uni_study analysis will be performed in a folder named after it, and meta analysis will be performed in a folder named {study1}_{study2}. The column names must match those below, and Theme must be the first column. genotype_list: {Path to file}. molecular_pheno: {Path to file}. region_list (list of regions to be analyzed): {Path to file}. covariate_file: {Path to file}. factor_analysis_opt: "APEX" vs "PEER" for factor analysis. LD options: "In-sample" LD vs {path to reference panel}. QTL_tool_option: "APEX" vs "TensorQTL" for QTL association. QTL_analysis_option: {Int for cis window} vs "trans". Populations: the populations from which the samples were drawn. Conditions: the nature of the molecular phenotype. Note: only data collections from the same populations and conditions will be merged to perform fixed-effect meta analysis. Output: ... Generation of MWE. This is the code to generate the mwe recipe and LD_recipe on the csg cluster
###Code
Recipe_temp = pd.DataFrame( {"Theme" : ["AC","DLPFC","PCC"] ,
"genotype_list" : ["/home/hs3163/GIT/ADSPFG-xQTL/MWE/mwe_genotype_list",
"/home/hs3163/GIT/ADSPFG-xQTL/MWE/mwe_genotype_list",
"/home/hs3163/GIT/ADSPFG-xQTL/MWE/mwe_genotype_list"],
"molecular_pheno" : ["/home/hs3163/Project/Rosmap/data/gene_exp/AC/geneTpmResidualsAgeGenderAdj_rename.txt",
"/home/hs3163/Project/Rosmap/data/gene_exp/DLPFC/geneTpmResidualsAgeGenderAdj_rename.txt",
"/home/hs3163/Project/Rosmap/data/gene_exp/PCC/geneTpmResidualsAgeGenderAdj_rename.txt"],
"region_list" : ["~/GIT/ADSPFG-xQTL/MWE/mwe_region",
"~/GIT/ADSPFG-xQTL/MWE/mwe_region",
"~/GIT/ADSPFG-xQTL/MWE/mwe_region"] ,
"covariate_file" : ["/home/hs3163/GIT/ADSPFG-xQTL/MWE/MWE.cov","/home/hs3163/GIT/ADSPFG-xQTL/MWE/MWE.cov","/home/hs3163/GIT/ADSPFG-xQTL/MWE/MWE.cov"],
"factor_analysis_opt" : ["APEX","APEX","APEX"],
"LD_Recipe": ["~/GIT/ADSPFG-xQTL/MWE/LD_Recipe","~/GIT/ADSPFG-xQTL/MWE/LD_Recipe","~/GIT/ADSPFG-xQTL/MWE/LD_Recipe"],
"QTL_tool_option" : ["APEX","APEX","APEX"],
"QTL_analysis_option" : ["cis","cis","cis"],
"cis_windows" : [500000,500000,5000000],
"Metal" : ["T","T","F"]}).to_csv("/home/hs3163/GIT/ADSPFG-xQTL/MWE/mwe_recipe_example","\t")
### note: Only data collection from the same Populations and conditions will me merged to perform Fix effect meta analysis
pd.DataFrame( {"Theme" : ["AC","DLPFC","PCC"] ,
"genotype_list" : [" /mnt/mfs/statgen/ROSMAP_xqtl/Rosmap_wgs_genotype_list.txt",
" /mnt/mfs/statgen/ROSMAP_xqtl/Rosmap_wgs_genotype_list.txt",
" /mnt/mfs/statgen/ROSMAP_xqtl/Rosmap_wgs_genotype_list.txt"],
"molecular_pheno" : ["/home/hs3163/Project/Rosmap/data/gene_exp/AC/geneTpmResidualsAgeGenderAdj_rename.txt",
"/home/hs3163/Project/Rosmap/data/gene_exp/DLPFC/geneTpmResidualsAgeGenderAdj_rename.txt",
"/home/hs3163/Project/Rosmap/data/gene_exp/PCC/geneTpmResidualsAgeGenderAdj_rename.txt"],
"region_list" : ["/home/hs3163/Project/Rosmap/data/gene_exp/AC/geneTpmResidualsAgeGenderAdj_rename_region_list.txt",
"/home/hs3163/Project/Rosmap/data/gene_exp/AC/geneTpmResidualsAgeGenderAdj_rename_region_list.txt",
"/home/hs3163/Project/Rosmap/data/gene_exp/AC/geneTpmResidualsAgeGenderAdj_rename_region_list.txt"] ,
"covariate_file" : ["None","None","None"],
"factor_analysis_opt" : ["BiCV","BiCV","BiCV"],
"LD_Recipe": ["~/GIT/ADSPFG-xQTL/MWE/LD_Recipe","~/GIT/ADSPFG-xQTL/MWE/LD_Recipe","~/GIT/ADSPFG-xQTL/MWE/LD_Recipe"],
"QTL_tool_option" : ["APEX","APEX","APEX"],
"QTL_analysis_option" : ["cis","cis","cis"],
"cis_windows" : [500000,500000,500000],
"Metal" : ["T","T","F"]}).to_csv("/home/hs3163/GIT/xqtl-pipeline/ROSMAP_recipe_example","\t", index = 0)
pd.DataFrame({"ld_file_prefix" : ["/mnt/mfs/statgen/neuro-twas/mv_wg/cache_arch/cache/geneTpmResidualsAgeGenderAdj_rename.","/mnt/mfs/statgen/neuro-twas/mv_wg/cache_arch/cache/geneTpmResidualsAgeGenderAdj_rename."],
"ld_file_surfix" : [".merged.ld.rds",".merged.ld.rds"]}).to_csv("~/GIT/ADSPFG-xQTL/MWE/LD_Recipe",sep = "\t")
nohup sos run /home/hs3163/GIT/xqtl-pipeline/pipeline/complete_analysis/eQTL_analysis.ipynb QTL \
--recipe /home/hs3163/GIT/ADSPFG-xQTL/MWE/mwe_recipe_example \
--wd ./ \
--exe_dir "/home/hs3163/GIT/xqtl-pipeline/pipeline/" &
nohup sos dryrun /home/hs3163/GIT/xqtl-pipeline/pipeline/complete_analysis/eQTL_analysis.ipynb mash_to_vcf \
--recipe /home/hs3163/GIT/xqtl-pipeline/ROSMAP_recipe_example --wd ./ --exe_dir "~/GIT/xqtl-pipeline/pipeline/" -s build &
nohup sos dryrun /home/hs3163/GIT/xqtl-pipeline/pipeline/complete_analysis/eQTL_analysis.ipynb phenotype_reformatting_by_gene \
--recipe /home/hs3163/GIT/xqtl-pipeline/ROSMAP_recipe_example --wd ./ --exe_dir "~/GIT/xqtl-pipeline/pipeline/" -s build &
nohup sos dryrun /home/hs3163/GIT/xqtl-pipeline/pipeline/complete_analysis/eQTL_analysis.ipynb genotype_reformatting_per_gene \
--recipe /home/hs3163/GIT/xqtl-pipeline/ROSMAP_recipe_example --wd ./ --exe_dir "~/GIT/xqtl-pipeline/pipeline/" -s build &
nohup sos dryrun /home/hs3163/GIT/xqtl-pipeline/pipeline/complete_analysis/eQTL_analysis.ipynb mixture_prior \
--recipe /home/hs3163/GIT/xqtl-pipeline/ROSMAP_recipe_example --wd ./ --exe_dir "~/GIT/xqtl-pipeline/pipeline/" -s build &
nohup sos run ~/GIT/bioworkflows/GWAS/PCA.ipynb flashpca \
--genoFile /mnt/mfs/statgen/xqtl_workflow_testing/ROSMAP/data_preprocessing/genotype/qc/PCC.mergrd.filtered.prune.unrelated.bed \
--name PCC \
--container_lmm /mnt/mfs/statgen/containers/xqtl_pipeline_sif/flashpcaR.sif \
--cwd /mnt/mfs/statgen/xqtl_workflow_testing/demo/test_pca/ \
-J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
nohup sos run ~/GIT/bioworkflows/GWAS/PCA.ipynb project_samples:1 \
--genoFile /mnt/mfs/statgen/xqtl_workflow_testing/ROSMAP/data_preprocessing/genotype/qc/PCC.mergrd.filtered.prune.related.bed \
--pca_model /mnt/mfs/statgen/xqtl_workflow_testing/demo/test_pca/PCC.mergrd.filtered.prune.unrelated.PCC.pca.rds \
--name PCC \
--container_lmm /mnt/mfs/statgen/containers/xqtl_pipeline_sif/flashpcaR.sif \
--cwd /mnt/mfs/statgen/xqtl_workflow_testing/demo/test_pca/ \
-J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
###Output
_____no_output_____
###Markdown
 Example for running the workflow. This will run the workflow via several submissions and save the output to nohup.out. Other example workflows: these commands run each of the substeps to test them individually
###Code
[global]
## The aforementioned input recipe
parameter: recipe = path
## Overall wd, the file structure of analysis is wd/[steps]/[sub_dir for each steps]
parameter: wd = path(".")
## Directory to the executable
parameter: exe_dir = path("~/GIT/ADSPFG-xQTL/workflow")
parameter: container = '/mnt/mfs/statgen/containers/twas_latest.sif'
parameter: container_base_bioinfo = '/mnt/mfs/statgen/containers/xqtl_pipeline_sif/base-bioinfo.sif'
parameter: container_apex = '/mnt/mfs/statgen/containers/xqtl_pipeline_sif/apex.sif'
parameter: container_PEER = '/mnt/mfs/statgen/containers/xqtl_pipeline_sif/PEER.sif'
parameter: container_TensorQTL = '/mnt/mfs/statgen/containers/xqtl_pipeline_sif/TensorQTL.sif'
parameter: container_mvsusie = '/mnt/mfs/statgen/containers/twas_latest.sif'
parameter: container_METAL = '/mnt/mfs/statgen/containers/xqtl_pipeline_sif/METAL.sif'
parameter: container_flashpca = '/mnt/mfs/statgen/containers/xqtl_pipeline_sif/flashpcaR.sif'
parameter: yml = "/home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml"
import pandas as pd
input_inv = pd.read_csv(recipe, sep = "\t")
Metal_theme = input_inv.query("Metal == 'T'")["Theme"].to_list()
Metal_theme_str = "-".join(Metal_theme)
Non_Metal_theme = input_inv.query("Metal != 'T'")["Theme"].to_list()
Non_Metal_theme.append(Metal_theme_str)
Theme_Prefix = "_".join(Non_Metal_theme)
parameter: LD_Recipe = path(input_inv["LD_Recipe"][0])
input_inv = input_inv.to_dict("records")
import os
###Output
_____no_output_____
###Markdown
 Molecular Phenotype Calling. Data Preprocessing. Molecular Phenotype Processing
###Code
#[Normalization]
#import os
#input: for_each = "input_inv"
#skip_if( os.path.exists(_input_inv["molecular_pheno"]))
#output: f'{wd:a}/data_preprocessing/normalization/{name}.mol_phe.bed.gz'
#bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
# sos run $[exe_dir]/data_preprocessing/phenotype/GWAS_QC.ipynb output \
# --counts_gct $[_input_inv["genecount_table"]] \
# --tpm_gct $[_input_inv["geneTpm_table"]] \
# --sample_participant_lookup $[_input_inv["sample_index"]] \
# --vcf_chr_list $[_input_inv["vcf_chr_list"]] \
# --container $[container_gtex] \
# --name $[_input_inv["Theme"]] \
# --wd $[wd:a]/data_preprocessing/normalization/ \
# --container $[container_base_bioinfo] \
# -J 200 -q csg -c $[yml] &
[annotation]
## Must be ran with internet connection
import os
input: for_each = "input_inv"
output: f'{wd:a}/data_preprocessing/annotation/{_input_inv["Theme"]}.{path(_input_inv["molecular_pheno"]):bn}.annotated.bed.gz'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/data_preprocessing/phenotype/annotation.ipynb annotation \
--molecular_pheno_whole $[_input_inv["molecular_pheno"]] \
--wd $[wd:a]/data_preprocessing/annotation \
--name $[_input_inv["Theme"]] --container $[container_base_bioinfo] -s build &
[phenotype_reformatting]
input: output_from("residual_phenotype"),group_with = "input_inv"
output: per_chrom_pheno_list = f'{wd:a}/data_preprocessing/phenotype_reformat/{_input_inv["Theme"]}.processed_phenotype.per_chrom.recipe',
pheno_mod = f'{wd:a}/data_preprocessing/phenotype_reformat/{_input_inv["Theme"]}.for_pca.mol_phe.exp'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/phenotype/phenotype_formatting.ipynb reformat \
--molecular_pheno_whole $[_input] \
--region_list $[_input_inv["region_list"]] \
--wd $[wd:a]/data_preprocessing/phenotype_reformat/ \
--name $[_input_inv["Theme"]] --container $[container_base_bioinfo] \
-J 200 -q csg -c $[yml]
###Output
_____no_output_____
###Markdown
 The reformatting by gene is particularly lengthy, so to avoid excessive waiting time, it is set up as a separate substep
###Code
[phenotype_reformatting_by_gene]
input: output_from("residual_phenotype"),group_with = "input_inv"
output: per_gene_pheno_list = f'{wd:a}/data_preprocessing/phenotype_reformat/{_input_inv["Theme"]}.processed_phenotype.per_gene.recipe'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/phenotype/phenotype_formatting.ipynb partition_by_gene \
--molecular_pheno_whole $[_input] \
--region_list $[_input_inv["region_list"]] \
--wd $[wd:a]/data_preprocessing/phenotype_reformat/ \
--name $[_input_inv["Theme"]] --container $[container_base_bioinfo] \
-J 200 -q csg -c $[yml]
###Output
_____no_output_____
###Markdown
Genotype Processing
###Code
[genotype_QC]
input: for_each = "input_inv"
output: merged_plink = f'{wd:a}/data_preprocessing/genotype/qc/{_input_inv["Theme"]}.mergrd.filtered.prune.bed',
unrelated = f'{wd:a}/data_preprocessing/genotype/qc/{_input_inv["Theme"]}.mergrd.filtered.prune.unrelated.bed',
related = f'{wd:a}/data_preprocessing/genotype/qc/{_input_inv["Theme"]}.mergrd.filtered.prune.related.bed'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/genotype/GWAS_QC.ipynb qc \
--genotype_list $[_input_inv["genotype_list"]] \
--name $[_input_inv["Theme"]] \
--container_lmm $[container_base_bioinfo] \
--cwd $[wd:a]/data_preprocessing/genotype/qc/ \
-J 200 -q csg -c $[yml]
[genotype_reformatting]
import pandas as pd
input: output_from("genotype_QC")["merged_plink"], group_with = "input_inv"
name = _input_inv["Theme"]
output: vcf_list = f'{wd}/data_preprocessing/genotype/{name}_per_chrom_vcf/{name}.vcf_chrom_list.txt',
per_chrom_plink_list = f'{wd}/data_preprocessing/genotype/{name}_per_chrom_plink/{name}.plink_chrom_list.txt'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/genotype/genotype_formatting.ipynb plink2vcf \
--genoFile $[_input] \
--name $[_input_inv["Theme"]] \
--container $[container_base_bioinfo] \
--region_list $[_input_inv["region_list"]] \
--wd $[wd:a]/data_preprocessing/genotype/ \
-J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml
sos run $[exe_dir]/data_preprocessing/genotype/genotype_formatting.ipynb plink_by_chrom \
--genoFile $[_input] \
--name $[_input_inv["Theme"]] \
--region_list $[_input_inv["region_list"]] \
--container $[container_base_bioinfo] \
--wd $[wd:a]/data_preprocessing/genotype/ \
-J 200 -q csg -c $[yml]
###Output
_____no_output_____
###Markdown
 The reformatting by gene is particularly lengthy, so to avoid excessive waiting time, it is set up as a separate substep
###Code
[genotype_reformatting_per_gene]
import pandas as pd
input: output_from("genotype_QC")["merged_plink"], group_with = "input_inv"
name = _input_inv["Theme"]
output: per_gene_plink = f'{wd}/data_preprocessing/genotype/{name}_per_gene_plink/{name}.plink_gene_list.txt'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/data_preprocessing/genotype/genotype_formatting.ipynb plink_by_gene \
--genoFile $[_input] \
--name $[_input_inv["Theme"]] \
--region_list $[_input_inv["region_list"]] \
--container $[container_base_bioinfo] \
--region_list $[_input_inv["region_list"]] \
--wd $[wd:a]/data_preprocessing/genotype/ \
-J 2000 -q csg -c $[yml]
[LD]
import pandas as pd
input: output_from("genotype_reformatting")["per_gene_plink"],group_with = "input_inv"
output: f'{wd}/data_preprocessing/genotype/LD/{_input_inv["Theme"]}._LD_recipe'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/genotype/LD.ipynb LD \
--genotype_list $[_input] \
--name $[_input_inv["Theme"]] \
--container $[container_base_bioinfo] \
--wd $[wd:a]/data_preprocessing/genotype/LD/ \
-J 200 -q csg -c $[yml]
[LD_Recipe]
input: output_from("LD"), group_by = "all"
output: f'{wd:a}/data_preprocessing/genotype/LD/sumstat_list'
python: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
import pandas as pd
input_list = [$[_input:r,]]
ld_recipe = pd.read_csv(input_list[0],sep = "\t")
for x in range(1,len(input_list)):
ld_recipe = ld_recipe.append(pd.read_csv(input_list[x],sep = "\t"))
ld_recipe.to_csv("$[_output]", index = 0 , sep = "\t")
[GRM]
import pandas as pd
input: output_from("genotype_reformatting")["per_chrom_plink_list"],group_with = "input_inv"
output: f'{wd}/data_preprocessing/genotype/grm/{_input_inv["Theme"]}.loco_grm_list.txt'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/genotype/GRM.ipynb GRM \
--genotype_list $[_input] \
--name $[_input_inv["Theme"]] \
--container $[container_base_bioinfo] \
--wd $[wd:a]/data_preprocessing/genotype/grm/ \
-J 200 -q csg -c $[yml]
###Output
_____no_output_____
###Markdown
Factor analysis
###Code
[factor]
input: output_from("genotype_reformatting")["vcf_list"],output_from("annotation"),group_with = "input_inv"
output: f'{wd}/data_preprocessing/covariate/{_input_inv["Theme"]}.{_input_inv["factor_analysis_opt"]}.cov'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/covariate/$[_input_inv["factor_analysis_opt"]]_factor.ipynb $[_input_inv["factor_analysis_opt"]] \
--molecular_pheno $[_input[1]] \
--genotype_list $[_input[0]] \
--name $[_input_inv["Theme"]] \
--wd $[wd:a]/data_preprocessing/covariate/ \
-J 200 -q csg -c $[yml] $[f'--covariate {_input_inv["covariate_file"]}' if os.path.exists(_input_inv["covariate_file"]) else f''] \
--container $[container_apex if _input_inv["factor_analysis_opt"] == "BiCV" else container_PEER]
[residual_phenotype]
input: output_from("factor"), output_from("annotation"),group_with = "input_inv"
output: f'{wd}/data_preprocessing/phenotype/{_input_inv["Theme"]}.mol_phe.resid.bed.gz'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/covariate/remove_covariates.ipynb Residual_Y \
--molecular_pheno_whole $[_input[1]] \
--factor $[_input[0]] \
--wd $[wd]/data_preprocessing/phenotype \
--name $[_input_inv["Theme"]] --container $[container_base_bioinfo] \
-J 200 -q csg -c $[yml]
[pca]
import pandas as pd
input: output_from("genotype_QC")["related"],output_from("genotype_QC")["unrelated"],group_with = "input_inv"
output: f'{wd}/data_preprocessing/covariate/pca/{_input[0]:bn}.{_input_inv["Theme"]}.pca.projected.rds'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/covariate/PCA.ipynb flashpca \
--genoFile $[_input[1]] \
--name $[_input_inv["Theme"]] \
--container_lmm $[container_flashpca] \
--cwd $[wd:a]/data_preprocessing/covariate/pca/ \
-J 200 -q csg -c $[yml]
sos run $[exe_dir]/data_preprocessing/covariate/PCA.ipynb project_samples:1 \
--genoFile $[_input[0]] \
--pca_model $[wd:a]/data_preprocessing/covariate/pca/$[_input[1]:bn].$[_input_inv["Theme"]].pca.rds \
--name $[_input_inv["Theme"]] \
--container_lmm $[container_flashpca] \
--cwd $[wd:a]/data_preprocessing/covariate/pca/ \
-J 200 -q csg -c $[yml]
[pca_factor_merge]
import pandas as pd
input: output_from("pca"),output_from("factor"),group_with = "input_inv"
output: f'{wd}/data_preprocessing/covariate/{_input[1]:bn}.pca.cov'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/data_preprocessing/covariate/merge_covariate.ipynb pca_factor_merge \
--factor_and_covariate $[_input[1]] \
--PC $[_input[0]] \
--container $[container_base_bioinfo] \
--wd $[wd:a]/data_preprocessing/covariate/ \
-J 200 -q csg -c $[yml]
###Output
_____no_output_____
###Markdown
QTL associations
###Code
[QTL_1]
input: output_from("pca_factor_merge"),output_from("GRM"),output_from("phenotype_reformatting")["per_chrom_pheno_list"],output_from("genotype_reformatting")["vcf_list"], output_from("genotype_reformatting")["per_chrom_plink_list"] ,group_with = "input_inv"
output: f'{wd:a}/association_scan/{_input_inv["QTL_tool_option"]}/{_input_inv["QTL_analysis_option"]}/{_input_inv["Theme"]}.{_input_inv["QTL_tool_option"]}_QTL_recipe.tsv'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/association_scan/$[_input_inv["QTL_tool_option"]]/$[_input_inv["QTL_tool_option"]].ipynb $[_input_inv["QTL_tool_option"]]_$[_input_inv["QTL_analysis_option"]] \
--molecular_pheno_list $[_input[2]] \
--covariate $[_input[0]]\
--genotype_file_list $[_input[3]] \
--container $[container_apex if _input_inv["QTL_tool_option"] == "APEX" else container_TensorQTL] \
--window $[_input_inv["cis_windows"]] \
--name $[_input_inv["Theme"]] \
--wd $[wd:a]/association_scan/$[_input_inv["QTL_tool_option"]]/$[_input_inv["QTL_analysis_option"]]/ \
-J 200 -q csg -c $[yml] $[f'--grm_list {_input[1]}' if _input_inv["QTL_tool_option"] == "APEX" else f'']
###Output
_____no_output_____
###Markdown
Example:sos run /home/hs3163/GIT/ADSPFG-xQTL/workflow/QTL_association/QTL_association.ipynb APEX_cis_Recipe \ --recipe data_preprocessing/PCC.data_proc_output_recipe.tsv \ --container /mnt/mfs/statgen/containers/apex.sif \ --window 500000 \ --name PCC \ --wd /mnt/mfs/statgen/xqtl_workflow_testing/testing_no_cov/QTL_association/ \ -J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml
###Code
[QTL_2]
input: group_by = "all"
output: f'{_input[0]:d}/sumstat_list'
python: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
import pandas as pd
input_list = [$[_input:r,]]
input_inv = $[input_inv]
sumstat_list = pd.read_csv(input_list[0],sep = "\t")
sumstat_list = sumstat_list.sort_values('#chr')
for x in range(1,len(input_list)):
sumstat_list = sumstat_list.merge(pd.read_csv(input_list[x],sep = "\t"), on = "#chr")
sumstat_list.columns = ["#chr"] + pd.DataFrame(input_inv)["Theme"].values.tolist()
sumstat_list.to_csv("$[_output]", index = 0 , sep = "\t")
###Output
_____no_output_____
###Markdown
 Meta Analysis. Input: 1. A recipe generated from the combination of previous steps. Output: 1. Recipe for Prior, Vhat, rds input, resid corr 3. vcf
###Code
[METAL]
input: output_from("QTL_2")
METAL_sumstat_list = f'{_input}.METAL.tsv'
sumstat_list = pd.read_csv(_input,sep = "\t")[["#chr"] + Metal_theme].to_csv(METAL_sumstat_list,sep = "\t", index = 0)
output: f'{wd}/multivariate/METAL/{Metal_theme_str}.METAL_list.txt'
##task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/multivariate/METAL/METAL.ipynb METAL \
--sumstat_list_path $[METAL_sumstat_list] \
--wd $[wd:a]/multivariate/METAL/ --container $[container_METAL] \
-J 200 -q csg -c $[yml]
###Output
_____no_output_____
###Markdown
MASH
###Code
[sumstat_merger_1]
parameter: sumstat_list = f'{wd}/multivariate/METAL/{Metal_theme_str}.METAL_list.txt'
input: output_from("QTL_2")
output: yml_list = f'{wd}/multivariate/MASH/Prep/yml_list.txt',
qced_sumstat_list = f'{wd}/multivariate/MASH/Prep/qc_sumstat_list.txt'
##task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/misc/yml_generator.ipynb yml_list \
--sumstat_list_path $[_input] \
--wd $[wd:a]/multivariate/MASH/Prep/ --container $[container_base_bioinfo]
sos run $[exe_dir]/misc/summary_stats_merger.ipynb \
--yml_list $[_output[0]] \
--cwd $[wd:a]/multivariate/MASH/Prep/ --container $[container_base_bioinfo] --keep_ambiguous True \
-J 200 -q csg -c $[yml]
[sumstat_merger_2]
input: named_output("qced_sumstat_list")
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
output: f'{wd}/multivariate/MASH/Prep/merge/RDS/{name}.analysis_unit'
##task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/multivariate/MASH/sumstat_processing.ipynb processing \
--sumstat_list_path $[_input] \
--region_list $[input_inv[0]["region_list"]] \
--wd $[wd:a]/multivariate/MASH/Prep/ --container $[container_base_bioinfo] \
-J 2000 -q csg -c $[yml]
[extract_effect]
input: output_from("sumstat_merger")
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
output: f'{wd}/multivariate/MASH/Prep/{name}.rds'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/multivariate/MASH/Signal_Extraction.ipynb extract_effects \
--cwd $[wd:a]/multivariate/MASH/Prep/ \
--container $[container_base_bioinfo] \
--name $[name] \
--analysis_units $[_input] \
-J 2000 -q csg -c $[yml]
[mash_model]
input: output_from("extract_effect")
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
output: MASH_model = f"{wd}/multivariate/MASH/{name}.EZ.V_simple.mash_model.rds",
resid_corr = f"{wd}/multivariate/MASH/{name}.EZ.V_simple.rds"
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/multivariate/MASH/mashr.ipynb mash \
--cwd $[wd:a]/multivariate/MASH/ \
--container $[container_mvsusie] \
--output_prefix $[name] \
--data $[_input] \
-J 200 -q csg -c $[yml]
[mash_posterior]
input: output_from("mash_model")["MASH_model"], output_from("sumstat_merger")
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
parameter: analysis_unit = _input[1]
output: f'{wd}/multivariate/MASH/mash_output_list'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/multivariate/MASH/mashr.ipynb posterior \
--cwd $[wd:a]/multivariate/MASH/ \
--container $[container_mvsusie] \
--output_prefix $[name] \
--analysis_units $[analysis_unit] \
-J 2000 -q csg -c $[yml]
[mash_to_vcf]
input: output_from("mash_posterior")
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
output: f'{wd}/multivariate/MASH/mash_vcf/vcf_output_list.txt'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/misc/rds_to_vcf.ipynb rds_to_vcf \
--wd $[wd:a]/multivariate/MASH/ \
--name $[name] \
--analysis_units $[_input] \
-J 2000 -q csg -c $[yml]
###Output
_____no_output_____
###Markdown
Fine mapping
###Code
[mixture_prior]
input: output_from("mash_model")["MASH_model"], output_from("extract_effect")
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
output: f'{wd}/fine_mapping/mixture_prior/{name}.ed_bovy.V_simple.rds'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/multivariate/MASH/mixture_prior.ipynb ed_bovy \
--cwd $[wd:a]/fine_mapping/mixture_prior/ \
--container $[container_mvsusie] \
--name $[name] \
--data $[_input[1]] \
--mixture_components_dir $[_input[0]:d] \
-J 200 -q csg -c $[yml]
nohup sos run /home/hs3163/GIT/xqtl-pipeline/pipeline/multivariate/MASH/mixture_prior.ipynb ed_bovy --model_data fine_mapping/mixture_prior/AC_DLPFC_PCC.ed_bovy.V_simple.rds --cwd ./ --container /mnt/mfs/statgen/containers/twas_latest.sif --name AC_DLPFC_PCC --data multivariate/MASH/Prep/AC_DLPFC_PCC.rds --mixture_components_dir multivariate/MASH -J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
[mvsusie_rss]
input: output_from("mixture_prior"), output_from("sumstat_merger"), output_from("mash_model")["resid_corr"]
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
parameter: analysis_unit = _input[1]
output: f'{wd:a}/fine_mapping/mvsusie_rss/{name}.mvsusie_rss.output_list.txt'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/fine_mapping/SuSiE/SuSiE_RSS.ipynb MvSuSiE_summary_stats_analysis \
--merged_analysis_unit $[analysis_unit] \
--resid_cor $[_input[2]] \
--prior $[_input[0]] \
--LD_Recipe /home/hs3163/GIT/ADSPFG-xQTL/MWE/LD_Recipe \
--container $[container_mvsusie] \
--wd $[wd:a]/fine_mapping/mvsusie_rss/ \
--Theme_prefix $[name] -J 200 -q csg -c $[yml]
nohup sos run /home/hs3163/GIT/xqtl-pipeline/pipeline/fine_mapping/SuSiE/SuSiE_RSS.ipynb MvSuSiE_summary_stats_analysis \
--merged_analysis_unit /mnt/mfs/statgen/xqtl_workflow_testing/ROSMAP/showcase_gene \
--resid_cor multivariate/MASH/AC_DLPFC_PCC.EZ.V_simple.rds \
--prior /mnt/mfs/statgen/xqtl_workflow_testing/ROSMAP/fine_mapping/mixture_prior/AC_DLPFC_PCC.ed_bovy.V_simple.rds \
--LD_Recipe /home/hs3163/GIT/ADSPFG-xQTL/MWE/LD_Recipe \
--container /mnt/mfs/statgen/containers/twas_latest.sif \
--wd /mnt/mfs/statgen/xqtl_workflow_testing/ROSMAP/fine_mapping/mvsusie_rss/ \
--Theme_prefix AC_DLPFC_PCC -J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml -s build &
[unisusie_rss]
input: output_from("mixture_prior"), output_from("sumstat_merger"), output_from("mash_model")["resid_corr"]
name = "_".join(pd.DataFrame(input_inv)["Theme"].values.tolist())
parameter: analysis_unit = _input[1]
output: f'{wd:a}/fine_mapping/unisusie_rss/{name}.unisusie_rss.output_list.txt'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/fine_mapping/SuSiE/SuSiE_RSS.ipynb UniSuSiE_summary_stats_analysis \
--merged_analysis_unit $[analysis_unit] \
--LD_Recipe /home/hs3163/GIT/ADSPFG-xQTL/MWE/LD_Recipe \
--container $[container_mvsusie] \
--wd $[wd:a]/fine_mapping/unisusie_rss/ \
--Theme_prefix $[name]
sos run /home/hs3163/GIT/xqtl-pipeline/pipeline/multivariate/MASH/mashr.ipynb mash \
--cwd ./ \
--container /mnt/mfs/statgen/containers/xqtl_pipeline_sif/mvsusie.sif \
--output_prefix AC_DLPFC_PCC \
--data /mnt/mfs/statgen/xqtl_workflow_testing/ROSMAP/multivariate/MASH/Prep/AC_DLPFC_PCC.rds \
-J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
[unisusie]
input: output_from("phenotype_reformatting_per_gene"),output_from("genotype_reformatting_per_gene"), group_with = "input_inv"
output: f'{wd:a}/fine_mapping/unisusie/{name}/{name}.unisusie.output_list.txt'
#task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/fine_mapping/SuSiE/SuSiE.ipynb UniSuSiE_summary_stats_analysis uni_susie \
--phenotype_list $[_input[0]] \
--genotype_list $[_input[1]] \
--container $[container_mvsusie] \
--region_list $[_input_inv["region_list"]] \
--name $[_input_inv["Theme"]] \
--wd $[wd:a]/fine_mapping/unisusie/$[name]/ \
-J 200 -q csg -c $[yml] &
###Output
_____no_output_____ |
kaggle/iris-dataset.ipynb | ###Markdown
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from numpy import genfromtxt
import os
!mkdir data #let us create data folder to hold our data
# install DenMune clustering algorithm using pip command from the official Python repository, PyPi
# from https://pypi.org/project/denmune/
!pip install denmune
# now import it
from denmune import DenMune
dataset = 'iris' # let us take iris dataset as an example
data_path = "/kaggle/input/denmune-datasets/"
output_path = "data/" # this is where any output will be saved to, i.e. 2-d version of N-D dataset
file_ext = ".txt"
data_file = data_path + dataset + file_ext # i.e. 'iris' + '.txt' ==> iris.txt
data = genfromtxt(data_file , delimiter='\t')
ground_ext = "-gt"
ground_file = data_path + dataset + ground_ext + file_ext
data_labels = genfromtxt(ground_file, delimiter='\t') # i.e. 'iris' + + '-gt + '.txt' ==> iris-gt.txt
data2d_ext = '-2d'
file_2d = output_path + dataset + data2d_ext + file_ext # 'iris' + '-2d' + '.txt' ==> iris-2d.txt
# Denmune's Parameters
verpose_mode = True # view in-depth analysis of time complexity and outlier detection, num of clusters
show_groundtrugh = True # show plots on/off
show_noise = True # show noise and outlier on/off
knn = 11
dm = DenMune(data=data, file_2d=file_2d, k_nearest=knn, verpose=verpose_mode, show_noise=show_noise, rgn_tsne=False)
if show_groundtrugh:
# Let us plot the groundtruth of this dataset which is reduced to 2-d using t-SNE
print ("Dataset\'s Groundtruth")
dm.plot_clusters(labels=data_labels, ground=True)
print('\n', "=====" * 20 , '\n')
labels_pred = dm.fit_predict()
validity = dm.validate_Clusters(labels_true=data_labels, labels_pred=labels_pred)
dm.plot_clusters(labels=labels_pred, show_noise=show_noise)
validity_key = "F1"
# Acc=1, F1-score=2, NMI=3, AMI=4, ARI=5, Homogeneity=6, and Completeness=7
print ('k=' , knn, validity_key , 'score is:', round(validity[validity_key],3))
###Output
Dataset's Groundtruth
|
environment.ipynb | ###Markdown
 Environment and Experiment Info. This notebook contains environment and further experiment information.
###Code
import platform
print(platform.platform())
%%bash
lshw -short
%%bash
lscpu
%%bash
pip3 freeze
###Output
absl-py==0.12.0
alembic==1.4.1
anyio==3.0.0
appdirs==1.4.4
argon2-cffi==20.1.0
astroid==2.5.3
astunparse==1.6.3
async-generator==1.10
attrs==20.3.0
Babel==2.9.0
backcall==0.2.0
black==20.8b1
bleach==3.3.0
cachetools==4.2.1
certifi==2020.12.5
cffi==1.14.5
chardet==4.0.0
click==7.1.2
cloudpickle==1.6.0
cycler==0.10.0
databricks-cli==0.14.3
decorator==5.0.7
defusedxml==0.7.1
deprecation==2.1.0
dm-tree==0.1.6
docker==5.0.0
entrypoints==0.3
Flask==1.1.2
flatbuffers==1.12
gast==0.3.3
gitdb==4.0.7
GitPython==3.1.15
google-auth==1.29.0
google-auth-oauthlib==0.4.4
google-pasta==0.2.0
greenlet==1.0.0
grpcio==1.32.0
gunicorn==20.1.0
h5py==2.10.0
idna==2.10
invoke==1.5.0
ipykernel==5.5.3
ipython==7.22.0
ipython-genutils==0.2.0
isort==5.8.0
itsdangerous==1.1.0
jedi==0.18.0
Jinja2==2.11.3
joblib==1.0.1
json5==0.9.5
jsonschema==3.2.0
jupyter-client==6.1.12
jupyter-core==4.7.1
jupyter-packaging==0.9.2
jupyter-server==1.6.2
jupyterlab==3.0.14
jupyterlab-pygments==0.1.2
jupyterlab-server==2.4.0
Keras-Preprocessing==1.1.2
kiwisolver==1.3.1
lazy-object-proxy==1.6.0
Mako==1.1.4
Markdown==3.3.4
MarkupSafe==1.1.1
matplotlib==3.4.1
mccabe==0.6.1
mistune==0.8.4
mlflow==1.15.0
mypy==0.812
mypy-extensions==0.4.3
nbclassic==0.2.7
nbclient==0.5.3
nbconvert==6.0.7
nbformat==5.1.3
nest-asyncio==1.5.1
notebook==6.3.0
numpy==1.19.5
oauthlib==3.1.0
opt-einsum==3.3.0
packaging==20.9
pandas==1.2.4
pandocfilters==1.4.3
parso==0.8.2
pathspec==0.8.1
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.2.0
pkg-resources==0.0.0
prometheus-client==0.10.1
prometheus-flask-exporter==0.18.1
prompt-toolkit==3.0.18
protobuf==3.15.8
ptyprocess==0.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
Pygments==2.8.1
pylint==2.7.4
pyparsing==2.4.7
pyrsistent==0.17.3
python-dateutil==2.8.1
python-editor==1.0.4
pytz==2021.1
PyYAML==5.4.1
pyzmq==22.0.3
querystring-parser==1.2.4
regex==2021.4.4
requests==2.25.1
requests-oauthlib==1.3.0
rsa==4.7.2
scikit-learn==0.24.1
scipy==1.6.2
seaborn==0.11.1
Send2Trash==1.5.0
six==1.15.0
sklearn==0.0
smmap==4.0.0
sniffio==1.2.0
SQLAlchemy==1.4.10
sqlparse==0.4.1
tabulate==0.8.9
tensorboard==2.5.0
tensorboard-data-server==0.6.0
tensorboard-plugin-wit==1.8.0
tensorflow==2.4.1
tensorflow-addons==0.12.1
tensorflow-estimator==2.4.0
tensorflow-probability==0.12.2
termcolor==1.1.0
terminado==0.9.4
testpath==0.4.4
threadpoolctl==2.1.0
toml==0.10.2
tomlkit==0.7.0
tornado==6.1
tqdm==4.60.0
traitlets==5.0.5
typed-ast==1.4.3
typeguard==2.12.0
typing-extensions==3.7.4.3
urllib3==1.26.4
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==0.58.0
Werkzeug==1.0.1
wrapt==1.12.1
###Markdown
 Symbolic Learners. We use ILASP-2i as the recommended version for non-noisy tasks and because, at the time of writing, the latest version, 4, returns unsatisfiable due to a bug. We use FastLAS version 3 beta since the published version, 1, does not run on our task due to bugs.
###Code
%%bash
ILASP
%%bash
FastLAS --version
%%bash
FastLAS --help
%%bash
clingo --help
###Output
clingo version 5.4.1
usage: clingo [number] [options] [files]
Clasp.Config Options:
--configuration=<arg> : Set default configuration [auto]
<arg>: {auto|frumpy|jumpy|tweety|handy|crafty|trendy|many|<file>}
auto : Select configuration based on problem type
frumpy: Use conservative defaults
jumpy : Use aggressive defaults
tweety: Use defaults geared towards asp problems
handy : Use defaults geared towards large problems
crafty: Use defaults geared towards crafted problems
trendy: Use defaults geared towards industrial problems
many : Use default portfolio to configure solver(s)
<file>: Use configuration file to configure solver(s)
--tester=<options> : Pass (quoted) string of <options> to tester
--stats[=<n>[,<t>]],-s : Enable {1=basic|2=full} statistics (<t> for tester)
--[no-]parse-ext : Enable extensions in non-aspif input
--[no-]parse-maxsat : Treat dimacs input as MaxSAT problem
Clasp.Solving Options:
--parallel-mode,-t <arg>: Run parallel search with given number of threads
<arg>: <n {1..64}>[,<mode {compete|split}>]
<n> : Number of threads to use in search
<mode>: Run competition or splitting based search [compete]
--enum-mode,-e <arg> : Configure enumeration algorithm [auto]
<arg>: {bt|record|brave|cautious|auto}
bt : Backtrack decision literals from solutions
record : Add nogoods for computed solutions
domRec : Add nogoods over true domain atoms
brave : Compute brave consequences (union of models)
cautious: Compute cautious consequences (intersection of models)
auto : Use bt for enumeration and record for optimization
--project[=<arg>|no] : Enable projective solution enumeration
<arg>: {show|project|auto}[,<bt {0..3}>] (Implicit: auto,3)
Project to atoms in show or project directives, or
select depending on the existence of a project directive
<bt> : Additional options for enumeration algorithm 'bt'
Use activity heuristic (1) when selecting backtracking literal
and/or progress saving (2) when retracting solution literals
--models,-n <n> : Compute at most <n> models (0 for all)
--opt-mode=<arg> : Configure optimization algorithm
<arg>: <mode {opt|enum|optN|ignore}>[,<bound>...]
opt : Find optimal model
enum : Find models with costs <= <bound>
optN : Find optimum, then enumerate optimal models
ignore: Ignore optimize statements
<bound> : Set initial bound for objective function(s)
Gringo Options:
--text : Print plain text format
--const,-c <id>=<term> : Replace term occurrences of <id> with <term>
Basic Options:
--help[=<n>],-h : Print {1=basic|2=more|3=full} help and exit
--version,-v : Print version information and exit
--verbose[=<n>],-V : Set verbosity level to <n>
--time-limit=<n> : Set time limit to <n> seconds (0=no limit)
--quiet[=<levels>],-q : Configure printing of models, costs, and calls
<levels>: <mod>[,<cost>][,<call>]
<mod> : print {0=all|1=last|2=no} models
<cost>: print {0=all|1=last|2=no} optimize values [<mod>]
<call>: print {0=all|1=last|2=no} call steps [2]
--pre[=<fmt>] : Print simplified program and exit
<fmt>: Set output format to {aspif|smodels} (implicit: aspif)
--mode=<arg> : Run in {clingo|clasp|gringo} mode
usage: clingo [number] [options] [files]
Default command-line:
clingo --configuration=auto --enum-mode=auto --verbose=1
Type 'clingo --help=2' for more options and defaults
and 'clingo --help=3' for all options and configurations.
clingo is part of Potassco: https://potassco.org/clingo
Get help/report bugs via : https://potassco.org/support
###Markdown
Eval from an example
###Code
from model import *
from data import *
import os
from tqdm import tqdm
import yaml
from utils import DotDict, adjust_learning_rate, accuracy
import torch
import traceback
os.environ['CUDA_VISIBLE_DEVICES'] = ''
config = 'configs/pointer_vocab_10k.yml'
with open(config, 'r') as f:
config = DotDict(yaml.safe_load(f))
print('started', config.name)
checkpoint_folder = os.path.join('checkpoints', config.name)
last_cpk = sorted(os.listdir(checkpoint_folder), key=lambda x: int(x[6:-4]), reverse=True)[0]
checkpoint_path = os.path.join(checkpoint_folder, last_cpk)
device = 'cpu'
data_val = MainDataset(
N_filename = config.data.N_filename,
T_filename = config.data.T_filename,
is_train=False,
truncate_size=config.data.truncate_size
)
test_loader = torch.utils.data.DataLoader(
data_val,
batch_size=config.train.batch_size,
shuffle=False,
num_workers=config.train.num_workers,
collate_fn=data_val.collate_fn
)
ignored_index = data_val.vocab_sizeT - 1
unk_index = data_val.vocab_sizeT - 2
model = MixtureAttention(
hidden_size = config.model.hidden_size,
vocab_sizeT = data_val.vocab_sizeT,
vocab_sizeN = data_val.vocab_sizeN,
attn_size = data_val.attn_size,
embedding_sizeT = config.model.embedding_sizeT,
embedding_sizeN = config.model.embedding_sizeN,
num_layers = 1,
dropout = config.model.dropout,
label_smoothing = config.model.label_smoothing,
pointer = config.model.pointer,
attn = config.model.attn,
device = device
)
cpk = torch.load(checkpoint_path, map_location=torch.device('cpu'))
model.load_state_dict(cpk['model'])
model = model.to(device)
train_dataN, test_dataN, vocab_sizeN, train_dataT, test_dataT, vocab_sizeT, attn_size, train_dataP, test_dataP = input_data(
config.data.N_filename, config.data.T_filename
)
f = open('pickle_data/terminal_dict_10k_PY.pickle', 'rb')
t_dict = pickle.load(f)
f.close()
len(t_dict['terminal_dict'])
t_reversed = {val: key for key, val in t_dict['terminal_dict'].items()}
def decode(arr):
return [t_reversed[item.item()] if item.item() < 10000 else item.item() - 10000 for item in arr]
sample_n = torch.tensor(test_dataN[5:6])
sample_t = torch.tensor(test_dataT[5:6])
sample_p = torch.tensor(test_dataP[5:6])
loss, ans = model(sample_n, sample_t, sample_p)
decode(sample_t[0])
decode(ans[0])
###Output
_____no_output_____
###Markdown
Google colab environment check
###Code
from shutil import which
from subprocess import check_output
def run(cmd):
return check_output(cmd.split(), text=True)
for item in ['pip', 'poetry', 'pipenv', 'python']:
path = which(item)
if path:
print(f"Path to {item} : {path}")
output = run(f"{path} --version")
print(output)
# Installed packages
run("pip list").split("\n")
run("uname -a")
run("cat /etc/issue").split("\n")
run("id")
run("lscpu").split("\n")
run("free -m").split("\n")
from subprocess import check_output
result = [item for item in check_output("env", text=True).split("\n") if item]
sorted(result, key = lambda x: x.split("=")[0] )
run("df -h").split("\n")
import psutil
def get_process_list_sorted_by_mem_usage():
processes = []
for proc in psutil.process_iter():
try:
data = proc.as_dict(attrs=['pid', 'name', 'username'])
data['vms'] = round(proc.memory_info().vms / (1024 * 1024), 3)
processes.append(data);
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess) as ex:
print(ex)
return sorted(processes, key=lambda x: x['vms'], reverse=True)
get_process_list_sorted_by_mem_usage()
###Output
_____no_output_____
###Markdown
 Environment and Experiment Info. This notebook contains environment and further experiment information.
###Code
import platform
print(platform.platform())
%%bash
lshw -short
%%bash
lscpu
%%bash
pip3 freeze
###Output
absl-py==0.10.0
appdirs==1.4.3
apturl==0.5.2
argon2-cffi==20.1.0
astroid==2.4.2
astunparse==1.6.3
async-generator==1.10
attrs==19.3.0
autobahn==17.10.1
Automat==0.8.0
backcall==0.2.0
bcrypt==3.1.7
beautifulsoup4==4.8.2
bleach==3.2.1
blinker==1.4
Brlapi==0.7.0
cachetools==4.1.1
cbor==1.0.0
ccsm==0.9.14.1
certifi==2020.6.20
cffi==1.14.3
chainer==7.7.0
chardet==3.0.4
chrome-gnome-shell==0.0.0
Click==7.0
colorama==0.4.3
command-not-found==0.3
compizconfig-python==0.9.14.1
configobj==5.0.6
constantly==15.1.0
coverage==5.3
cryptography==2.8
cupshelpers==1.0
cycler==0.10.0
Cython==0.29.14
dbus-python==1.2.16
decorator==4.4.2
defer==1.0.6
defusedxml==0.6.0
distlib==0.3.0
distro==1.4.0
distro-info===0.23ubuntu1
docutils==0.16
duplicity==0.8.12.0
entrypoints==0.3
fasteners==0.14.1
feedgenerator==1.9.1
filelock==3.0.12
Flask==1.1.2
future==0.18.2
gast==0.3.3
google-auth==1.21.3
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
gpg===1.13.1-unknown
grpcio==1.32.0
Guake==3.6.3
h5py==2.10.0
html5lib==1.0.1
httplib2==0.14.0
hyperlink==19.0.0
idna==2.8
importlib-metadata==1.5.0
incremental==16.10.1
invoke==1.4.1
ipdb==0.13.3
ipykernel==5.3.4
ipython==7.18.1
ipython-genutils==0.2.0
ipywidgets==7.5.1
isort==5.5.3
itsdangerous==1.1.0
jedi==0.17.2
Jinja2==2.11.2
joblib==0.16.0
json5==0.9.5
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==6.1.7
jupyter-console==6.2.0
jupyter-core==4.6.3
jupyterlab==2.2.8
jupyterlab-pygments==0.1.1
jupyterlab-server==1.2.0
Keras==2.4.3
Keras-Preprocessing==1.1.2
keyring==18.0.1
kiwisolver==1.0.1
language-selector==0.1
launchpadlib==1.10.13
lazr.restfulclient==0.14.2
lazr.uri==1.0.3
lazy-object-proxy==1.4.3
libvirt-python==6.1.0
lockfile==0.12.2
louis==3.12.0
lxml==4.5.0
lz4==3.0.2+dfsg
macaroonbakery==1.3.1
Mako==1.1.0
Markdown==3.2.2
MarkupSafe==1.1.0
matplotlib==3.3.2
mccabe==0.6.1
meld==3.20.2
mistune==0.8.4
monotonic==1.5
more-itertools==4.2.0
mpi4py==3.0.3
mpmath==1.1.0
mypy==0.782
mypy-extensions==0.4.3
nbclient==0.5.0
nbconvert==6.0.6
nbformat==5.0.7
nest-asyncio==1.4.1
netifaces==0.10.4
nltk==3.4.5
notebook==6.1.4
numpy==1.19.2
oauthlib==3.1.0
olefile==0.46
onboard==1.4.1
openshot-qt==2.4.3
opt-einsum==3.3.0
packaging==20.3
pandas==1.1.2
pandocfilters==1.4.2
paramiko==2.6.0
parso==0.7.1
pbr==5.4.5
pelican==4.5.0
pexpect==4.6.0
pickleshare==0.7.5
Pillow==7.0.0
prometheus-client==0.8.0
prompt-toolkit==3.0.7
protobuf==3.13.0
psutil==5.5.1
ptyprocess==0.6.0
py-ubjson==0.14.0
pyasn1==0.4.2
pyasn1-modules==0.2.1
PyAudio==0.2.11
pycairo==1.16.2
pycparser==2.20
pycrypto==2.6.1
pycups==1.9.73
Pygments==2.3.1
PyGObject==3.36.0
PyHamcrest==1.9.0
PyJWT==1.7.1
pylint==2.6.0
pymacaroons==0.13.0
PyNaCl==1.3.0
pyOpenSSL==19.0.0
pyparsing==2.4.6
pypng==0.0.20
PyQRCode==1.2.1
PyQt5==5.14.1
PyQtWebEngine==5.14.0
pyRFC3339==1.1
pyrsistent==0.15.5
python-apt==2.0.0+ubuntu0.20.4.1
python-dateutil==2.7.3
python-debian===0.1.36ubuntu1
python-snappy==0.5.3
PyTrie==0.2
pytz==2019.3
pyxdg==0.26
PyYAML==5.3.1
pyzmq==19.0.2
qtconsole==4.7.7
QtPy==1.9.0
reportlab==3.5.34
requests==2.24.0
requests-oauthlib==1.3.0
requests-unixsocket==0.2.0
rsa==4.6
rubber==1.5.1
scikit-learn==0.23.2
scipy==1.5.2
screen-resolution-extra==0.0.0
seaborn==0.11.0
SecretStorage==2.3.1
Send2Trash==1.5.0
service-identity==18.1.0
simplejson==3.16.0
sip==4.19.21
six==1.14.0
sklearn==0.0
soupsieve==1.9.5
stevedore==1.32.0
systemd-python==234
tensorboard==2.3.0
tensorboard-plugin-wit==1.7.0
tensorflow==2.3.1
tensorflow-estimator==2.3.0
termcolor==1.1.0
terminado==0.9.1
testpath==0.4.4
threadpoolctl==2.1.0
toml==0.10.1
torch==1.6.0
torchvision==0.7.0
tornado==6.0.4
tqdm==4.50.0
traitlets==5.0.4
Twisted==18.9.0
txaio==2.10.0
typed-ast==1.4.1
typing-extensions==3.7.4.3
u-msgpack-python==2.1
ubuntu-advantage-tools==20.3
ubuntu-drivers-common==0.0.0
ufw==0.36
unattended-upgrades==0.1
Unidecode==1.1.1
urllib3==1.25.8
usb-creator==0.3.7
vboxapi==1.0
virtualenv==20.0.17
virtualenv-clone==0.3.0
virtualenvwrapper==4.8.4
wadllib==1.3.3
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.5.1
wrapt==1.12.1
wsaccel==0.6.2
xkit==0.0.0
zipp==1.0.0
zope.interface==4.7.1
###Markdown
Actions:- 0 (rotation), 1 (other rotation), 2 (move outwards), 3 (move inwards)
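For readability, the four action ids can be kept in a small lookup table (an illustrative sketch; the names are descriptive labels only, not part of the robot API):
###Code
# Illustrative sketch: human-readable labels for the discrete action ids listed above.
ACTION_NAMES = {0: 'rotate', 1: 'rotate (other direction)', 2: 'move outwards', 3: 'move inwards'}
print(ACTION_NAMES)
###Output
_____no_output_____
###Markdown
The Environment class below maps these ids to motor commands and also records which action reverses which (opposite_action).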
###Code
class Environment():
def __init__(self, field_classifier, reward_classifier, delta_measurement = .05, num_measurements = 10, color_on = True):
self.__env = System(brick_ip='ev3dev.local', get_state_mode='dict')
self.delta_measurement = delta_measurement
self.num_measurements = num_measurements
self.field_classifier = utils.load_pickle(field_classifier)
self.reward_classifier = utils.load_pickle(reward_classifier)
self.opposite_action = {0:1,1:0,2:3,3:2}
self.on_field = True
self.color_on = color_on
def reset(self):
# stop current action
self.__env.reset()
# Go to initial state
# return state
return self.prepro([self.state])
def go_to_init_state(self):
self.__env.go_to_init_state()
print('#'*30)
print('Going to Init')
print('#'*30)
time.sleep(5)
def step(self, action):
# give the action to the motors
self.__env.perform_actions([action])
state = []
done = False
# we will perform this action for
measurement = 0
border_count = 0
while measurement < self.num_measurements:
start = time.time()
time_arr = []
# Get the current state
s = self.get_state()
state.append(s)
start_1 = time.time()
time_arr.append(start_1-start)
measurement += 1
#Sleep a bit so next time we get a different state
time.sleep(self.delta_measurement)
start_2 = time.time()
time_arr.append(start_2-start_1)
# A check whether we are still in the field
if self.color_on:
if self.field_classifier.predict(s['raw_col']) == [0]:
print('I am outside')
border_count += 1
if self.on_field:
self.__env.perform_actions([self.opposite_action[action]])
print('BOUNCIN!!1')
time.sleep(1)
self.on_field = False
else:
self.on_field = True
if border_count ==3:
self.go_to_init_state()
border_count = 0
time_arr.append(time.time()-start_2)
# Stop the actions
self.__env.stop()
# Calculate the intermediate reward
if self.color_on:
reward = self.calculate_reward(state)
else:
reward = 0
return state[-1]['index'], reward, done, {}
def calculate_reward(self, state):
# Predict propba
if not self.on_field:
return -20
weights = np.ones(shape=(self.num_measurements,))
weights = [weight * i for i, weight in enumerate(weights)]
x = np.array([s['raw_col'] for s in state]).squeeze()
# r = (np.argmax(self.reward_classifier.predict_proba(x), axis = 1) == 1).sum()
# sum the probabilities of black class and compute a function of it
black_proba = self.reward_classifier.predict_proba(x)[:,1]
black_proba_weighted = [weight * p for weight, p in zip(weights, black_proba)]
black_threshold = 0.3
r = np.max([0, (np.sum(black_proba_weighted)-(black_threshold*self.num_measurements)) * 5])
return r
def prepro(self,state):
# Deprecated: preprocessing is handled when the state is retrieved via get_state().
s = state[-1]
if self.color_on:
x = (s['cs'][0][0]//10,s['cs'][0][1]//10,s['cs'][0][2]//10, s['bot'][0]//36, s['top'][0]//36)
else:
x = (s['bot'][0]//36, s['top'][0]//36)
return x
def get_state(self):
s_1 = self.state
s_2 = self.state
s_2['bot'] = s_2['bot'][0]//36
s_2['top'] = s_2['top'][0]//36
if self.color_on:
col = np.r_[s_1['cs'][0],s_2['cs'][0]]
col_ind = col//3
s = {'index': (*tuple(col_ind),s_2['bot'], s_2['top']), 'raw_col' : np.array([col])}
else:
s = (s_2['bot'], s_2['top'])
return s
@property
def state(self):
return self.__env.get_state()
@property
def action_space(self):
return len(self.__env.get_action_space()[0])
sys = System(get_state_mode = 'dict')
sys.perform_actions([0,0])
#sys.stop()
sys.stop()
plt.plot(np.array(arr))
env = Environment('./mlp_on_off.pickle','./mlp_white_black.pickle', delta_measurement= 0.0, num_measurements = 3)
env.reset()
env.get_state()
num_episodes = 30
# Make an Agent
q_table = T_Agent(4, learn_rate = .8, gamma =.95)
#create lists to contain total rewards and steps per episode
env.reset()
rewards = []
for i in range(num_episodes):
# Decay the exploration
q_table.explore_decay = i
s = env.go_to_init_state()
rAll = 0
d = False
j = 0
#The Q-Table learning algorithm
while j < 99:
j+=1
#Choose an action by greedily (with noise) picking from Q table
a = q_table.next_action(s)
print('Action',a)
#Get new state and reward from environment
s1,r,d,_ = env.step(a)
print('\r ', r)
#Update Q-Table with new knowledge
q_table.update(r, s1)
rAll += r
s = s1
if d == True:
break
rewards.append(rAll)
print('#'*10, 'End Episode', '#'*10)
print("Average score over last part " + str(sum(rewards[-500:])/500))
start = time.time()
print(env.state)
print(time.time()- start)
env.reset()
q_table.val_table.shape
###Output
_____no_output_____ |
Problem 040 - Champernowne's constant.ipynb | ###Markdown
An irrational decimal fraction is created by concatenating the positive integers: 0.123456789101112131415161718192021... It can be seen that the 12th digit of the fractional part is 1. If dn represents the nth digit of the fractional part, find the value of the following expression: d1 × d10 × d100 × d1000 × d10000 × d100000 × d1000000
###Code
open System.Text
let limit = 1000000
let appendChampernowne (sb:StringBuilder) n =
sb.Append (string n) |> ignore
n
let rec buildChamp' (sb:StringBuilder) n =
if (sb.Length) > limit then (sb.ToString())
else buildChamp' sb (appendChampernowne sb (1 + n))
let buildChamp n =
buildChamp' (new StringBuilder()) n
let champernowne = (buildChamp 0).ToCharArray()
[1; 10; 100; 1000; 10000; 100000; 1000000;]
|> List.map (fun i -> champernowne.[i-1])
|> List.map (string >> int)
|> List.fold (fun acc element -> acc * element) 1
###Output
_____no_output_____ |
1_Design_of_Experiments.ipynb | ###Markdown
Design of experiments > *The term experiment is defined as the systematic procedure carried out under controlled conditions in order to discover an unknown effect, to test or establish a hypothesis, or to illustrate a known effect. When analyzing a process, experiments are often used to evaluate which process inputs have a significant impact on the process output, and what the target level of those inputs should be to achieve a desired result (output).* **Cake-baking process example:** [https://www.moresteam.com/toolbox/design-of-experiments.cfm](https://www.moresteam.com/toolbox/design-of-experiments.cfm) ---
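As a minimal illustration of the idea (a sketch with made-up factor names and levels, not part of the experiment below), a two-factor full-factorial design simply enumerates every level combination:
###Code
# Illustrative sketch: a 3x3 full-factorial design for two hypothetical inputs.
import itertools

oven_temp_levels = [160, 180, 200]   # e.g. baking temperature in degrees Celsius (made up)
baking_time_levels = [25, 30, 35]    # e.g. baking time in minutes (made up)

for run_id, (temp, minutes) in enumerate(itertools.product(oven_temp_levels, baking_time_levels), start=1):
    print(run_id, temp, minutes)
###Output
_____no_output_____
###Markdown
Enumerating every combination quickly becomes expensive as the number of inputs grows; the cells below therefore generate a space-filling Latin-hypercube design with pyDOE instead.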
###Code
# Install dependencies
!pip install pyDOE
# Mount Google Drive folder and create PML folder
from google.colab import drive
drive.mount('/content/drive/')
%mkdir -p /content/drive/My\ Drive/PML
# Import all dependencies
from pyDOE import lhs
from scipy.stats.distributions import norm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Parameters to Adjust
NUMBER_INPUTS = 2
NUMBER_SAMPLES = 10
# Boundaries for each input (LOWER, UPPER)
INPUTS_BOUNDARIES = [
[2, 6],
[0.75, 3]
]
# Generate DoE data (lhs - Latin-hypercube)
doe = lhs(NUMBER_INPUTS, samples=NUMBER_SAMPLES, criterion='maximin', iterations=10000)
plt.plot(doe[:,0],doe[:,1], '.k')
plt.show()
# Scale DoE data to Input Boundaries
scaledDoe = doe.copy()
for i in range(NUMBER_INPUTS):
scaledDoe[:,i] = scaledDoe[:,i]*(INPUTS_BOUNDARIES[i][1]-INPUTS_BOUNDARIES[i][0]) + INPUTS_BOUNDARIES[i][0]
plt.plot(scaledDoe[:,0],scaledDoe[:,1], '.k')
plt.show()
# Create DataFrame table with scaled DoE data
df = pd.DataFrame(scaledDoe)
# Name input cols as x1,x2.. and add empty output col y
df.columns = [ "x{}".format(i+1) for i in range(NUMBER_INPUTS)]
df['y']=''
# Preview data table
df
# Save DoE data to Google Drive PML folder
df.to_excel('drive/My Drive/PML/doe.xlsx', index=False)
###Output
_____no_output_____ |
CRM/Customer Churn Prediction/notebook.ipynb | ###Markdown
1. Business Understanding. Customer churn is a customer's decision to stop purchasing a particular company's service. It thus represents the counterpart to long-term customer loyalty. In order to promote customer loyalty, companies must use analyses that recognize at an early stage whether a customer intends to leave. This enables marketing and sales measures to be initiated before the customer is actually lost. In this context, the service specifically answers two questions: With what probability, based on historical data, will a customer migrate to another provider? And which factors lead to customer churn? 2. Data and Data Understanding. The data set of a fictitious telecommunications company is used to visualize and implement the service. It consists of 7,043 rows, each describing a customer with 21 columns. Each column defines a different characteristic (attribute) of the customers. Based on this data, it should be classified whether a customer leaves the company or not. For this purpose, the historical data contain the target variable “Churn”, which indicates whether a customer has churned. 2.1. Import of Relevant Modules
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import warnings
import imblearn
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from imblearn.under_sampling import InstanceHardnessThreshold
from sklearn import metrics
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
sns.set()
# remove warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
2.2. Read Data
###Code
data_raw = pd.read_csv("https://storage.googleapis.com/ml-service-repository-datastorage/Customer_Churn_Prediction_data.csv")
data_raw.head()
data_raw.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7043 entries, 0 to 7042
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 customerID 7043 non-null object
1 gender 7043 non-null object
2 SeniorCitizen 7043 non-null int64
3 Partner 7043 non-null object
4 Dependents 7043 non-null object
5 tenure 7043 non-null int64
6 PhoneService 7043 non-null object
7 MultipleLines 7043 non-null object
8 InternetService 7043 non-null object
9 OnlineSecurity 7043 non-null object
10 OnlineBackup 7043 non-null object
11 DeviceProtection 7043 non-null object
12 TechSupport 7043 non-null object
13 StreamingTV 7043 non-null object
14 StreamingMovies 7043 non-null object
15 Contract 7043 non-null object
16 PaperlessBilling 7043 non-null object
17 PaymentMethod 7043 non-null object
18 MonthlyCharges 7043 non-null float64
19 TotalCharges 7043 non-null object
20 Churn 7043 non-null object
dtypes: float64(1), int64(2), object(18)
memory usage: 1.1+ MB
###Markdown
The data set consists of 7,043 rows and 21 attributes:- Attribute to be predicted: Churn- Numeric attributes: Tenure, MonthlyCharges and TotalCharges.- Categorical attributes: CustomerID, Gender, SeniorCitizen, Partner, Dependents, PhoneService, MultipleLines, InternetService, OnlineSecurity, OnlineBackup, DeviceProtection, TechSupport, StreamingTV, StreamingMovies, Contract, PaperlessBilling, PaymentMethod. Not all data types were read in correctly:- TotalCharges must be a numerical value -> convert to float
###Code
# test for duplicates
data_raw[data_raw.duplicated(keep=False)]
###Output
_____no_output_____
###Markdown
No duplicates in the data frame. 2.3. Data Cleaning. Errors from the initial read are corrected here, before the actual data preparation.
###Code
# convert total charges
data_raw['TotalCharges'] = pd.to_numeric(data_raw['TotalCharges'], errors='coerce')
data_raw.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7043 entries, 0 to 7042
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 customerID 7043 non-null object
1 gender 7043 non-null object
2 SeniorCitizen 7043 non-null int64
3 Partner 7043 non-null object
4 Dependents 7043 non-null object
5 tenure 7043 non-null int64
6 PhoneService 7043 non-null object
7 MultipleLines 7043 non-null object
8 InternetService 7043 non-null object
9 OnlineSecurity 7043 non-null object
10 OnlineBackup 7043 non-null object
11 DeviceProtection 7043 non-null object
12 TechSupport 7043 non-null object
13 StreamingTV 7043 non-null object
14 StreamingMovies 7043 non-null object
15 Contract 7043 non-null object
16 PaperlessBilling 7043 non-null object
17 PaymentMethod 7043 non-null object
18 MonthlyCharges 7043 non-null float64
19 TotalCharges 7032 non-null float64
20 Churn 7043 non-null object
dtypes: float64(2), int64(2), object(17)
memory usage: 1.1+ MB
###Markdown
The conversion of TotalCharges has produced null values (NaN). These must be removed.
###Code
# Remove rows with null values (NaN)
# axis = 0 rows / axis = 1 columns
data_no_mv = data_raw.dropna(axis=0)
###Output
_____no_output_____
###Markdown
2.4. Descriptive Analytics. In this part of the notebook, descriptive analytics is used to deepen the data understanding. After removing the null values, the data set consists of 7,032 rows, each describing a customer, and 21 columns that define the customer's attributes. With the help of this data, an attempt should be made to classify whether a customer leaves or not. For this purpose, the historical data contain the target variable “Churn”, which indicates whether a customer has churned. 2.4.1. Continuous Features. First, the distributions of the continuous features are examined individually; in a second step the categorical features are related to the target variable.
###Code
# select the continuous (numeric) features
numeric_data = data_no_mv.select_dtypes(include=[np.number])
###Output
_____no_output_____
###Markdown
Tenure
###Code
sns.displot(numeric_data["tenure"])
###Output
_____no_output_____
###Markdown
- No normal distribution recognizable.- No outliers recognizable.- Customers are spread fairly evenly over the individual tenure months, but a large share of customers has only recently joined the company.
###Code
sns.distplot(data_no_mv[data_no_mv.Churn == 'No']["tenure"],
bins=10,
color='orange',
label='Non-Churn',
kde=True)
sns.distplot(data_no_mv[data_no_mv.Churn == 'Yes']["tenure"],
bins=10,
color='blue',
label='Churn',
kde=True)
###Output
_____no_output_____
###Markdown
Customers who have not been with the company for long are more likely to migrate than long-term customers. Monthly Charges
###Code
sns.distplot(numeric_data["MonthlyCharges"])
###Output
_____no_output_____
###Markdown
- No normal distribution recognizable.- Most of the customers are in the lower part of the distribution and pay relatively low monthly fees.- Nevertheless, the curve runs evenly, with another increase towards the upper end, and accordingly no outliers can be identified.
###Code
sns.distplot(data_no_mv[data_no_mv.Churn == 'No']["MonthlyCharges"],
bins=10,
color='orange',
label='Non-Churn',
kde=True)
sns.distplot(data_no_mv[data_no_mv.Churn == 'Yes']["MonthlyCharges"],
bins=10,
color='blue',
label='Churn',
kde=True)
###Output
_____no_output_____
###Markdown
- Customers with low monthly fees are more likely to churn.- Churn trend between customers who are churning and customers who are not churning becomes nearly the same as monthly fees increase. Total Charges
###Code
sns.distplot(numeric_data["TotalCharges"])
###Output
_____no_output_____
###Markdown
- The curve flattens out strongly towards the upper end.- Similarities to an exponential distribution can be seen. -> Test a logarithmic transformation to approach a normal distribution.- It is questionable whether there are outliers in the upper range. -> box plot
###Code
# Create a boxplot for TotalCharges to make sure there are no outliers.
plt.boxplot(numeric_data["TotalCharges"])
plt.show()
###Output
_____no_output_____
###Markdown
- box plot shows no outliers.- This means that no outliers can be identified for total charges either.
###Code
# logarithmic transformation
log_charges = np.log(data_no_mv["TotalCharges"])
sns.distplot(log_charges)
###Output
_____no_output_____
###Markdown
- Even the transformation with the help of the logarithm does not produce a normal distribution.- Before further transformations, the correlation with other variables should first be examined.
###Code
sns.distplot(data_no_mv[data_no_mv.Churn == 'No']["TotalCharges"],
bins=10,
color='orange',
label='Non-Churn',
kde=True)
sns.distplot(data_no_mv[data_no_mv.Churn == 'Yes']["TotalCharges"],
bins=10,
color='blue',
label='Churn',
kde=True)
###Output
_____no_output_____
###Markdown
The distribution is almost identical across the entire range of costs for both customers who are churning and customers who are not churning. Correlation Analysis
###Code
# correlation between continuous features
feature_corr = numeric_data.drop("SeniorCitizen", axis=1).corr()
sns.heatmap(feature_corr, annot=True, cmap='coolwarm')
###Output
_____no_output_____
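###Markdown
Before reading the heatmap visually, the critical pairs can also be listed programmatically (an illustrative sketch reusing the feature_corr frame from the cell above):
###Code
# Illustrative sketch: list feature pairs whose absolute correlation exceeds 0.8.
corr_threshold = 0.8
high_corr = (
    feature_corr.abs()
    .where(np.triu(np.ones(feature_corr.shape, dtype=bool), k=1))  # keep each pair only once
    .stack()
)
print(high_corr[high_corr > corr_threshold])
###Output
_____no_output_____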
###Markdown
The correlation matrix shows that the attributes "tenure" and "TotalCharges" have a critical positive correlation of over 0.8. This relationship will be re-examined later in the context of multicollinearity, where one of the two features must be removed. Scatterplots with Continuous Features and Target
###Code
sns.scatterplot(data=data_no_mv, x="tenure", y="MonthlyCharges", hue="Churn")
###Output
_____no_output_____
###Markdown
The scatter plot suggests that customers in the upper left area, i.e. customers with high monthly costs and a short tenure with the company, are most likely to churn.
###Code
sns.scatterplot(data=data_no_mv, x="tenure", y="TotalCharges", hue="Churn")
###Output
_____no_output_____
###Markdown
There is a purely logical, linear relationship between length of service and the total costs billed. The longer a person has been a customer, the more monthly amounts he has already had to pay. 2.4.2. Categorical Features Churn (Target) First, the distribution of the target variable churn is examined.
###Code
# produce pie chart for churn
# compute percentage shares
churn_rate = data_no_mv.Churn.value_counts() / len(data_no_mv.Churn)
# Plot
labels = 'No churn', 'Churn'
fig, ax = plt.subplots()
ax.pie(churn_rate, labels=labels, autopct='%.f%%')
ax.set_title('Churn vs. non-churn')
###Output
_____no_output_____
###Markdown
- Churns correspond to around 27% of the total data set, while non-churns correspond to around 73%.- This is an unbalanced data set and another metric must be used in the evaluation phase. Gender
###Code
sns.countplot(x="gender", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
The churn rate between male and female is approximately the same. Senior Citizen
###Code
sns.countplot(x="SeniorCitizen", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers classified as seniors are more likely to migrate. Partner
###Code
sns.countplot(x="Partner", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who do not have a partner are more likely to migrate. Dependents
###Code
sns.countplot(x="Dependents", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who have relatives are more likely to migrate. Multiple Lines
###Code
sns.countplot(x="MultipleLines", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who have multiple connections are less likely to migrate. Internet Service
###Code
sns.countplot(x="InternetService", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
If a customer has a fiber optic connection, he is more likely to drop out than a customer with DSL. Online Security
###Code
sns.countplot(x="OnlineSecurity", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who do not use the Internet security service are more likely to migrate. Online Backup
###Code
sns.countplot(x="OnlineBackup", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
People who do not use online backup are more likely to migrate. Device Protection
###Code
sns.countplot(x="DeviceProtection", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who have not purchased additional device protection are more likely to migrate. Tech Support
###Code
sns.countplot(x="TechSupport", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who do not use tech support are more likely to migrate. Streaming TV/ Streaming Movies
###Code
for col in ["StreamingTV", "StreamingMovies"]:
sns.countplot(x=col, hue='Churn', data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
The addition of film and TV streaming offers has hardly any effect on the churn rate. Paperless Billing
###Code
sns.countplot(x="PaperlessBilling", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who pay without an invoice are more likely to migrate. Payment Method
###Code
sns.countplot(x="PaymentMethod", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers who pay using electronic checks migrate significantly more often than customers who use a different payment method. Contract
###Code
sns.countplot(x="Contract", hue="Churn", data=data_no_mv)
plt.show()
###Output
_____no_output_____
###Markdown
Customers with short-term commitments are more likely to leave than customers with longer-term contracts. 3. Data Preparation 3.1. Reduce Customer ID
###Code
# Removing the Customer ID, it does not add value to the model
data_prep = data_no_mv.drop("customerID", axis = 1)
###Output
_____no_output_____
###Markdown
3.2. Recoding of Categorical Variables
###Code
# Convert binary variables to 1 and 0 with Yes and No
bin_var = ["Partner","Dependents","PhoneService","PaperlessBilling","Churn"]
def binaer_umwandeln(x):
return x.map({'Yes':1,'No':0})
data_prep[bin_var]=data_prep[bin_var].apply(binaer_umwandeln)
data_prep.head()
# create dummies
data_enc = pd.get_dummies(data_prep, drop_first=True)
data_enc.head()
# Dropping of dummies that also contain No phone service and No Internet service
dup_variables = ["OnlineSecurity_No internet service","OnlineBackup_No internet service", "TechSupport_No internet service","StreamingTV_No internet service","StreamingMovies_No internet service", "DeviceProtection_No internet service","MultipleLines_No phone service"]
data_enc.drop(dup_variables, axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
3.3. Test for Multicollinearity. In order to ensure correct operation of the later regression, there must be no multicollinearity between the variables. This is checked using the variance inflation factor (VIF) from the Statsmodels library.
###Code
# independent variables
vif_test = data_enc.drop("Churn", axis=1)
# VIF dataframe
vif_data = pd.DataFrame()
vif_data["feature"] = vif_test.columns
# VIF for each Feature
vif_data["VIF"] = [variance_inflation_factor(vif_test.values, i)
for i in range(len(vif_test.columns))]
print(vif_data)
###Output
feature VIF
0 SeniorCitizen 1.376564
1 Partner 2.824725
2 Dependents 1.969391
3 tenure 20.482153
4 PhoneService 47.244378
5 PaperlessBilling 2.956951
6 MonthlyCharges 212.353073
7 TotalCharges 21.374002
8 gender_Male 2.021331
9 MultipleLines_Yes 2.861614
10 InternetService_Fiber optic 17.695260
11 InternetService_No 8.234451
12 OnlineSecurity_Yes 2.682712
13 OnlineBackup_Yes 2.909898
14 DeviceProtection_Yes 2.992570
15 TechSupport_Yes 2.758343
16 StreamingTV_Yes 4.928957
17 StreamingMovies_Yes 5.090603
18 Contract_One year 2.056188
19 Contract_Two year 3.487502
20 PaymentMethod_Credit card (automatic) 1.984196
21 PaymentMethod_Electronic check 2.955994
22 PaymentMethod_Mailed check 2.383290
###Markdown
"MonthlyCharges" has the highest VIF and is removed from the dataset.
###Code
data_enc.drop("MonthlyCharges", axis=1, inplace=True)
# the independent variables set
vif_test = data_enc.drop("Churn", axis=1)
# VIF dataframe
vif_data = pd.DataFrame()
vif_data["feature"] = vif_test.columns
# VIF for each Feature
vif_data["VIF"] = [variance_inflation_factor(vif_test.values, i)
for i in range(len(vif_test.columns))]
print(vif_data)
###Output
feature VIF
0 SeniorCitizen 1.366018
1 Partner 2.817414
2 Dependents 1.961947
3 tenure 17.073930
4 PhoneService 9.277446
5 PaperlessBilling 2.796488
6 TotalCharges 18.028499
7 gender_Male 1.942509
8 MultipleLines_Yes 2.514269
9 InternetService_Fiber optic 4.186492
10 InternetService_No 3.473225
11 OnlineSecurity_Yes 1.986701
12 OnlineBackup_Yes 2.182678
13 DeviceProtection_Yes 2.299462
14 TechSupport_Yes 2.099655
15 StreamingTV_Yes 2.749724
16 StreamingMovies_Yes 2.771330
17 Contract_One year 2.056169
18 Contract_Two year 3.468149
19 PaymentMethod_Credit card (automatic) 1.820729
20 PaymentMethod_Electronic check 2.535918
21 PaymentMethod_Mailed check 1.982063
###Markdown
"TotalCharges" has the highest VIF and is removed from the dataset.
###Code
data_enc.drop("TotalCharges", axis=1, inplace=True)
# the independent variables set
vif_test = data_enc.drop("Churn", axis=1)
# VIF dataframe
vif_data = pd.DataFrame()
vif_data["feature"] = vif_test.columns
# calculating VIF for each feature
vif_data["VIF"] = [variance_inflation_factor(vif_test.values, i)
for i in range(len(vif_test.columns))]
print(vif_data)
###Output
feature VIF
0 SeniorCitizen 1.363244
1 Partner 2.816895
2 Dependents 1.956413
3 tenure 7.530356
4 PhoneService 9.260839
5 PaperlessBilling 2.757816
6 gender_Male 1.931277
7 MultipleLines_Yes 2.426699
8 InternetService_Fiber optic 3.581328
9 InternetService_No 3.321342
10 OnlineSecurity_Yes 1.947904
11 OnlineBackup_Yes 2.093763
12 DeviceProtection_Yes 2.241375
13 TechSupport_Yes 2.060410
14 StreamingTV_Yes 2.636855
15 StreamingMovies_Yes 2.661529
16 Contract_One year 2.055971
17 Contract_Two year 3.456061
18 PaymentMethod_Credit card (automatic) 1.794059
19 PaymentMethod_Electronic check 2.401970
20 PaymentMethod_Mailed check 1.967082
###Markdown
None of the variables now has a VIF greater than 10. 3.4. Feature Scaling
###Code
# Separate target variable and predictors
y = data_enc["Churn"]
X = data_enc.drop(labels = ["Churn"], axis = 1)
# Scaling the variables
num_features = ['tenure']
scaler = StandardScaler()
X[num_features] = scaler.fit_transform(X[num_features])
X.head()
###Output
_____no_output_____
###Markdown
3.5. Undersampling
###Code
iht = InstanceHardnessThreshold(random_state=0,estimator=LogisticRegression (solver='lbfgs', multi_class='auto'))
X_resampled, y_resampled = iht.fit_resample(X, y)
###Output
_____no_output_____
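###Markdown
As a quick sanity check (illustrative sketch), the class balance before and after the resampling can be compared:
###Code
# Illustrative sketch: class balance before vs. after InstanceHardnessThreshold resampling.
print('Before:', y.value_counts().to_dict())
print('After: ', pd.Series(y_resampled).value_counts().to_dict())
###Output
_____no_output_____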
###Markdown
3.6. Create Test and Training Data
###Code
# Split dataset in train and test datasets
# The default value of 80% to 20% is used.
X_train, X_test, y_train, y_test = train_test_split(X_resampled, y_resampled, random_state=110)
###Output
_____no_output_____
###Markdown
4. Modelling and Evaluation 4.1. Logistic Regression. Logistic regression is used to solve the problem. The two libraries Statsmodels and Scikit-Learn are used for this. The complete evaluation of the model takes place only in the Scikit-Learn subsection. Statsmodels Training and Prediction
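Before fitting, a brief reminder of what the model estimates (a minimal sketch, not part of the pipeline): the churn probability is obtained by passing a linear combination of the features through the sigmoid function.
###Code
# Illustrative sketch of the logistic (sigmoid) link used by logistic regression.
# z stands for the linear combination: intercept + sum(coefficient_i * x_i).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5 -> undecided
print(sigmoid(3.0))   # close to 1 -> likely churn
print(sigmoid(-3.0))  # close to 0 -> likely no churn
###Output
_____no_output_____
###Markdown
The Statsmodels fit below additionally reports a p-value per coefficient, which is used afterwards for feature selection.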
###Code
# add constant
X_const = sm.add_constant(X_train)
# create model
log_reg = sm.Logit(y_train, X_const).fit()
print(log_reg.summary())
###Output
Optimization terminated successfully.
Current function value: 0.082006
Iterations 11
Logit Regression Results
==============================================================================
Dep. Variable: Churn No. Observations: 2803
Model: Logit Df Residuals: 2781
Method: MLE Df Model: 21
Date: Thu, 21 Oct 2021 Pseudo R-squ.: 0.8817
Time: 15:00:28 Log-Likelihood: -229.86
converged: True LL-Null: -1942.4
Covariance Type: nonrobust LLR p-value: 0.000
=========================================================================================================
coef std err z P>|z| [0.025 0.975]
---------------------------------------------------------------------------------------------------------
const 5.1912 0.828 6.266 0.000 3.567 6.815
SeniorCitizen 0.4609 0.457 1.008 0.313 -0.435 1.357
Partner -0.4112 0.302 -1.362 0.173 -1.003 0.181
Dependents -0.5746 0.294 -1.952 0.051 -1.151 0.002
tenure -2.9281 0.309 -9.468 0.000 -3.534 -2.322
PhoneService -1.2307 0.544 -2.261 0.024 -2.298 -0.164
PaperlessBilling 1.2621 0.288 4.385 0.000 0.698 1.826
gender_Male -0.1334 0.255 -0.524 0.600 -0.633 0.366
MultipleLines_Yes 1.0865 0.336 3.231 0.001 0.427 1.746
InternetService_Fiber optic 3.1681 0.400 7.916 0.000 2.384 3.952
InternetService_No -2.8314 0.567 -4.992 0.000 -3.943 -1.720
OnlineSecurity_Yes -1.7901 0.321 -5.581 0.000 -2.419 -1.161
OnlineBackup_Yes -0.3203 0.309 -1.036 0.300 -0.926 0.286
DeviceProtection_Yes 0.4336 0.331 1.312 0.190 -0.214 1.082
TechSupport_Yes -0.8710 0.329 -2.648 0.008 -1.516 -0.226
StreamingTV_Yes 1.1971 0.351 3.414 0.001 0.510 1.884
StreamingMovies_Yes 1.4263 0.374 3.815 0.000 0.693 2.159
Contract_One year -3.5720 0.488 -7.317 0.000 -4.529 -2.615
Contract_Two year -6.5206 0.584 -11.164 0.000 -7.665 -5.376
PaymentMethod_Credit card (automatic) -0.0720 0.313 -0.230 0.818 -0.686 0.542
PaymentMethod_Electronic check 1.2794 0.406 3.154 0.002 0.484 2.075
PaymentMethod_Mailed check -0.3240 0.398 -0.813 0.416 -1.105 0.457
=========================================================================================================
Possibly complete quasi-separation: A fraction 0.37 of observations can be
perfectly predicted. This might indicate that there is complete
quasi-separation. In this case some parameters will not be identified.
###Markdown
The trained model contains statistically non-significant variables. A variable is considered non-significant here if its P>|z| value is greater than 0.05 and it is not the constant.
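The affected variables can also be listed directly from the fitted result (an illustrative sketch using the p-values reported by Statsmodels):
###Code
# Illustrative sketch: list all non-constant features with p-value above 0.05.
p_values = log_reg.pvalues.drop("const")
print(p_values[p_values > 0.05].sort_values(ascending=False))
###Output
_____no_output_____
###Markdown
The features removed in the next cell are taken from this list.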
###Code
# Removing the statistically non-significant features (P>|z|> 0.05)
insignificant_features = ["Partner", "gender_Male", "OnlineBackup_Yes", "DeviceProtection_Yes", "PaymentMethod_Credit card (automatic)","PaymentMethod_Mailed check"]
X_train.drop(insignificant_features, axis=1, inplace=True)
X_test.drop(insignificant_features, axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Now a second model can be created:
###Code
# new model
X_const = sm.add_constant(X_train)
log_reg2 = sm.Logit(y_train, X_const).fit()
print(log_reg2.summary())
###Output
Optimization terminated successfully.
Current function value: 0.083077
Iterations 11
Logit Regression Results
==============================================================================
Dep. Variable: Churn No. Observations: 2803
Model: Logit Df Residuals: 2787
Method: MLE Df Model: 15
Date: Thu, 21 Oct 2021 Pseudo R-squ.: 0.8801
Time: 15:00:28 Log-Likelihood: -232.87
converged: True LL-Null: -1942.4
Covariance Type: nonrobust LLR p-value: 0.000
==================================================================================================
coef std err z P>|z| [0.025 0.975]
--------------------------------------------------------------------------------------------------
const 4.7119 0.718 6.566 0.000 3.305 6.118
SeniorCitizen 0.3954 0.458 0.864 0.387 -0.501 1.292
Dependents -0.7328 0.262 -2.797 0.005 -1.246 -0.219
tenure -2.9242 0.297 -9.845 0.000 -3.506 -2.342
PhoneService -1.2073 0.540 -2.235 0.025 -2.266 -0.149
PaperlessBilling 1.2161 0.285 4.273 0.000 0.658 1.774
MultipleLines_Yes 1.0989 0.331 3.320 0.001 0.450 1.748
InternetService_Fiber optic 3.1159 0.391 7.966 0.000 2.349 3.883
InternetService_No -2.8462 0.529 -5.381 0.000 -3.883 -1.809
OnlineSecurity_Yes -1.7441 0.313 -5.576 0.000 -2.357 -1.131
TechSupport_Yes -0.8357 0.325 -2.569 0.010 -1.473 -0.198
StreamingTV_Yes 1.2193 0.348 3.508 0.000 0.538 1.901
StreamingMovies_Yes 1.4394 0.368 3.908 0.000 0.717 2.161
Contract_One year -3.4572 0.471 -7.337 0.000 -4.381 -2.534
Contract_Two year -6.3299 0.557 -11.372 0.000 -7.421 -5.239
PaymentMethod_Electronic check 1.3103 0.362 3.623 0.000 0.601 2.019
==================================================================================================
Possibly complete quasi-separation: A fraction 0.36 of observations can be
perfectly predicted. This might indicate that there is complete
quasi-separation. In this case some parameters will not be identified.
###Markdown
No statistically insignificant variables remain. The final model can now be fitted:
###Code
# final model
X_const = sm.add_constant(X_train)
log_reg_final = sm.Logit(y_train, X_const).fit()
print(log_reg_final.summary())
# prediction
y_hat = log_reg_final.predict(sm.add_constant(X_test))
# Statsmodel only gives the probabilities, therefore rounding is required.
prediction = list(map(round, y_hat))
###Output
_____no_output_____
###Markdown
4.2. Evaluation. Several metrics are used for the evaluation; these can be produced more conveniently with Scikit-Learn. Therefore, the identical model to the Statsmodels one is created again with Scikit-Learn. Scikit-Learn Training and Prediction
###Code
# C is needed to build the exact same model as with Statsmodels; source: https://www.kdnuggets.com/2016/06/regularization-logistic-regression.html
logistic_model = LogisticRegression(random_state=0, C=1e8)
# prediction with testdata
result = logistic_model.fit(X_train,y_train)
prediction_test = logistic_model.predict(X_test)
prediction_train = logistic_model.predict(X_train)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
# Accuracy Score
acc = metrics.accuracy_score(y_test, prediction_test)
print('Accuracy with testdata: {}'.format(acc))
###Output
Accuracy with testdata: 0.9882352941176471
###Markdown
The Accuracy suggests an above average model. However, it is an unbalanced data set. Therefore, further metrics have to be analyzed.
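Two further metrics from sklearn that are often reported alongside plain accuracy (an illustrative sketch):
###Code
# Illustrative sketch: balanced accuracy and F1 score on the test data.
print('Balanced accuracy:', metrics.balanced_accuracy_score(y_test, prediction_test))
print('F1 score:', metrics.f1_score(y_test, prediction_test))
###Output
_____no_output_____
###Markdown
The classification report below breaks the figures down per class.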
###Code
# classification report
print("traindata:")
print(classification_report(y_train,prediction_train))
print("testdata:")
print(classification_report(y_test,prediction_test))
###Output
traindata:
precision recall f1-score support
0 0.96 1.00 0.98 1374
1 1.00 0.96 0.98 1429
accuracy 0.98 2803
macro avg 0.98 0.98 0.98 2803
weighted avg 0.98 0.98 0.98 2803
testdata:
precision recall f1-score support
0 0.98 1.00 0.99 495
1 1.00 0.98 0.99 440
accuracy 0.99 935
macro avg 0.99 0.99 0.99 935
weighted avg 0.99 0.99 0.99 935
###Markdown
The accuracy for the training and test data sets is very similar. Therefore, overfitting or underfitting should not be assumed.
###Code
# Confusion matrix testdata
cm = confusion_matrix(y_test,prediction_test)
df_cm = pd.DataFrame(cm, index=['No Churn','Churn'], columns=['No Churn', 'Churn'],)
fig = plt.figure(figsize=[10,7])
heatmap = sns.heatmap(df_cm, annot=True, fmt="d")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=14)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=14)
plt.ylabel('True label')
plt.xlabel('Predicted label')
# metrics from confusion matrix
tn, fp, fn, tp = cm.ravel()
recall = tp/(fn+tp)
precision = tp/(tp+fp)
print("True Negatives: " + str(tn))
print("False Positives: " + str(fp))
print("False Negatives: " + str(fn))
print("True Positives: " + str(tp))
print("Recall: " + str(recall))
print("Precision: " + str(precision))
###Output
True Negatives: 493
False Positives: 2
False Negatives: 9
True Positives: 431
Recall: 0.9795454545454545
Precision: 0.9953810623556582
###Markdown
Precision and recall provide a much more realistic picture of the model than accuracy alone. For this use case the recall is clearly more important than the precision: missing a churner is more costly than contacting a customer unnecessarily, so the recall should be improved even at the expense of the precision.
###Code
# ROC curve, AUC
fig, ax = plt.subplots(figsize=(8,6))
ax.set_title('ROC curve')
plot = metrics.plot_roc_curve(logistic_model, X_test, y_test, ax=ax);
ax.plot([0,1], [0,1], '--');
###Output
_____no_output_____
###Markdown
The AUC of the ROC curve yields a good value of 0.84. It can be concluded that there is potential for optimization by tuning the classification threshold. 4.3. Interpretation. First, however, the results should be illustrated for the business, and it should be clarified which factors drive churn and which speak against it.
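One convenient way to communicate the coefficients to the business (an illustrative sketch): exponentiating them turns log-odds into odds ratios.
###Code
# Illustrative sketch: odds ratios per feature (values > 1 increase the churn odds).
odds_ratios = pd.Series(np.exp(logistic_model.coef_[0]), index=X_train.columns.values)
print(odds_ratios.sort_values(ascending=False).head(7))
###Output
_____no_output_____
###Markdown
The raw coefficients themselves are read out and visualised below.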
###Code
# Read out regression coefficients and thus find out importance of individual attributes
weights = pd.Series(logistic_model.coef_[0],
index=X_train.columns.values)
weights.sort_values(ascending = False)
# Graphical representation of key features that lead to churn.
weights = pd.Series(logistic_model.coef_[0],
index=X_train.columns.values)
print (weights.sort_values(ascending = False)[:7].plot(kind='bar'))
###Output
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
The three main features that cause churn are:- The fiber optic service (InternetService_Fiber optic),- the online payments (PaperlessBilling) and - the subscription of the additional movie streaming service (StreamingMovies_Yes).
###Code
# Most important features that keep customers from churning
print(weights.sort_values(ascending = False)[-8:].plot(kind='bar'))
###Output
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
The three most important features that keep customers from churning are:- The contracts that can be terminated for two years (Contract_Two year),- the time people have been customers of a company (Tenure) and - No subscription to the Internet service (InternetService_No). 4.4. Model Optimization. The recall is the decisive target metric for this use case and should be pushed further, even at the expense of precision. Therefore, the metrics are analyzed at different thresholds of the logistic regression.
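As a compact alternative to a manual grid (an illustrative sketch), sklearn can compute the precision/recall trade-off for all thresholds at once from the predicted churn probabilities:
###Code
# Illustrative sketch: precision and recall as functions of the decision threshold.
from sklearn.metrics import precision_recall_curve

churn_proba = logistic_model.predict_proba(X_test)[:, 1]
precisions, recalls, thresholds = precision_recall_curve(y_test, churn_proba)
plt.plot(thresholds, precisions[:-1], label='Precision')
plt.plot(thresholds, recalls[:-1], label='Recall')
plt.xlabel('Threshold')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The manual threshold grid below makes the same trade-off explicit for selected values.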
###Code
# Testing the metrics at different thresholds
threshold_list = [0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,.7,.75,.8,.85,.9,.95,.99]
pred_proba_df = y_hat
for i in threshold_list:
print ('\n******** For a Threshold about {} ******'.format(i))
# Round up if value is above threshold
y_test_pred = pred_proba_df.apply(lambda x: 1 if x>i else 0)
# read metrics
test_accuracy = metrics.accuracy_score(y_test, y_test_pred)
print("Accuracy: {}".format(test_accuracy))
# Confusion matrix
c = confusion_matrix(y_test, y_test_pred)
tn, fp, fn, tp = c.ravel()
recall = tp/(fn+tp)
precision = tp/(tp+fp)
# print metrics
print("True Negatives: " + str(tn))
print("False Positives: " + str(fp))
print("False Negatives: " + str(fn))
print("True Positives: " + str(tp))
print("Recall: " + str(recall))
print("Precision: " + str(precision))
###Output
******** For a Threshold about 0.05 ******
Accuracy: 0.8588235294117647
True Negatives: 367
False Positives: 128
False Negatives: 4
True Positives: 436
Recall: 0.990909090909091
Precision: 0.7730496453900709
******** For a Threshold about 0.1 ******
Accuracy: 0.9144385026737968
True Negatives: 420
False Positives: 75
False Negatives: 5
True Positives: 435
Recall: 0.9886363636363636
Precision: 0.8529411764705882
******** For a Threshold about 0.15 ******
Accuracy: 0.9422459893048128
True Negatives: 446
False Positives: 49
False Negatives: 5
True Positives: 435
Recall: 0.9886363636363636
Precision: 0.8987603305785123
******** For a Threshold about 0.2 ******
Accuracy: 0.9657754010695188
True Negatives: 468
False Positives: 27
False Negatives: 5
True Positives: 435
Recall: 0.9886363636363636
Precision: 0.9415584415584416
******** For a Threshold about 0.25 ******
Accuracy: 0.9786096256684492
True Negatives: 481
False Positives: 14
False Negatives: 6
True Positives: 434
Recall: 0.9863636363636363
Precision: 0.96875
******** For a Threshold about 0.3 ******
Accuracy: 0.9818181818181818
True Negatives: 486
False Positives: 9
False Negatives: 8
True Positives: 432
Recall: 0.9818181818181818
Precision: 0.9795918367346939
******** For a Threshold about 0.35 ******
Accuracy: 0.986096256684492
True Negatives: 490
False Positives: 5
False Negatives: 8
True Positives: 432
Recall: 0.9818181818181818
Precision: 0.988558352402746
******** For a Threshold about 0.4 ******
Accuracy: 0.9871657754010695
True Negatives: 491
False Positives: 4
False Negatives: 8
True Positives: 432
Recall: 0.9818181818181818
Precision: 0.9908256880733946
******** For a Threshold about 0.45 ******
Accuracy: 0.9893048128342246
True Negatives: 493
False Positives: 2
False Negatives: 8
True Positives: 432
Recall: 0.9818181818181818
Precision: 0.9953917050691244
******** For a Threshold about 0.5 ******
Accuracy: 0.9882352941176471
True Negatives: 493
False Positives: 2
False Negatives: 9
True Positives: 431
Recall: 0.9795454545454545
Precision: 0.9953810623556582
******** For a Threshold about 0.55 ******
Accuracy: 0.9882352941176471
True Negatives: 493
False Positives: 2
False Negatives: 9
True Positives: 431
Recall: 0.9795454545454545
Precision: 0.9953810623556582
******** For a Threshold about 0.6 ******
Accuracy: 0.9893048128342246
True Negatives: 494
False Positives: 1
False Negatives: 9
True Positives: 431
Recall: 0.9795454545454545
Precision: 0.9976851851851852
******** For a Threshold about 0.65 ******
Accuracy: 0.9893048128342246
True Negatives: 494
False Positives: 1
False Negatives: 9
True Positives: 431
Recall: 0.9795454545454545
Precision: 0.9976851851851852
******** For a Threshold about 0.7 ******
Accuracy: 0.9903743315508021
True Negatives: 495
False Positives: 0
False Negatives: 9
True Positives: 431
Recall: 0.9795454545454545
Precision: 1.0
******** For a Threshold about 0.75 ******
Accuracy: 0.9903743315508021
True Negatives: 495
False Positives: 0
False Negatives: 9
True Positives: 431
Recall: 0.9795454545454545
Precision: 1.0
******** For a Threshold about 0.8 ******
Accuracy: 0.9893048128342246
True Negatives: 495
False Positives: 0
False Negatives: 10
True Positives: 430
Recall: 0.9772727272727273
Precision: 1.0
******** For a Threshold about 0.85 ******
Accuracy: 0.9882352941176471
True Negatives: 495
False Positives: 0
False Negatives: 11
True Positives: 429
Recall: 0.975
Precision: 1.0
******** For a Threshold about 0.9 ******
Accuracy: 0.9871657754010695
True Negatives: 495
False Positives: 0
False Negatives: 12
True Positives: 428
Recall: 0.9727272727272728
Precision: 1.0
******** For a Threshold about 0.95 ******
Accuracy: 0.9807486631016042
True Negatives: 495
False Positives: 0
False Negatives: 18
True Positives: 422
Recall: 0.9590909090909091
Precision: 1.0
******** For a Threshold about 0.99 ******
Accuracy: 0.9497326203208556
True Negatives: 495
False Positives: 0
False Negatives: 47
True Positives: 393
Recall: 0.8931818181818182
Precision: 1.0
###Markdown
A threshold of 0.3 offers a better result for the application: it shifts the balance further towards recall at the expense of precision, and for this use case the loss in precision is negligible. This results in the following values:
###Code
# Threshold about 0,3
y_test_pred = pred_proba_df.apply(lambda x: 1 if x>0.30 else 0)
test_accuracy = metrics.accuracy_score(y_test, y_test_pred)
c = confusion_matrix(y_test, y_test_pred)
# read values from confusion matrix
tn, fp, fn, tp = c.ravel()
recall = tp/(fn+tp)
precision = tp/(tp+fp)
print(classification_report(y_test,y_test_pred))
# create confusion matrix
print("Confusion matrix for the new threshold:")
df_cm = pd.DataFrame(c, index=['No Churn','Churn'], columns=['No Churn', 'Churn'],)
fig = plt.figure(figsize=[10,7])
heatmap = sns.heatmap(df_cm, annot=True, fmt="d")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=14)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=14)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
print(" ")
# print metrics
print("Metrics for the new threshold:")
print("Accuracy: {}".format(test_accuracy))
print("True Negatives: " + str(tn))
print("False Positives: " + str(fp))
print("False Negatives: " + str(fn))
print("True Positives: " + str(tp))
print("Recall: " + str(recall))
print("Precision: " + str(precision))
###Output
precision recall f1-score support
0 0.98 0.98 0.98 495
1 0.98 0.98 0.98 440
accuracy 0.98 935
macro avg 0.98 0.98 0.98 935
weighted avg 0.98 0.98 0.98 935
Confusion matrix for the new threshold:
###Markdown
As expected, the rate of customers incorrectly classified as churn increases. In turn, however, the number of customers who are correctly predicted as churners (true positives) also increases. As elaborated in the term paper, this trade-off is acceptable because, in case of doubt, a customer merely receives an unnecessary call from the service team, may even perceive this call as good service, and may be bound to the company in the longer term. 5. Deployment
###Code
# Separate individual (scaled) customer
customer_df = X_test.iloc[896]
# Overview about the customer
customer_df
# execute prediction
cust_pred = logistic_model.predict([customer_df])
# evaluate results
def check_prediction(pred):
if pred[0] == 1:
print("The customer will probably churn! Inform Customer Relationship Management!")
else:
print("The customer probably will not churn.")
check_prediction(cust_pred)
###Output
The customer probably will not churn.
|
Spark Fundamentals I/2. rdd-operations.ipynb | ###Markdown
Spark Fundamentals I - Introduction to Spark Python - Working with RDD operations **Related free online courses:** Related courses can be found in the following learning paths:- [Spark Fundamentals path](http://cocl.us/Spark_Fundamentals_Path)- [Big Data Fundamentals path](http://cocl.us/Big_Data_Fundamentals_Path) Analyzing a log file. First, let's download the tools that we need to use Spark in SN Labs.
###Code
!pip install findspark
!pip install pyspark
import findspark
findspark.init()
import pyspark
sc = pyspark.SparkContext.getOrCreate()
###Output
Requirement already satisfied: findspark in /home/jupyterlab/conda/envs/python/lib/python3.7/site-packages (1.4.2)
Requirement already satisfied: pyspark in /home/jupyterlab/conda/envs/python/lib/python3.7/site-packages (3.1.2)
Requirement already satisfied: py4j==0.10.9 in /home/jupyterlab/conda/envs/python/lib/python3.7/site-packages (from pyspark) (0.10.9)
###Markdown
If you completed the **Getting Started** lab, then you should have the data downloaded and unzipped in the _/resources/jupyterlab/labs/BD0211EN/LabData/_ directory. Otherwise, please uncomment **the last two lines of code** in each of the following cells to download and unzip the data.
###Code
## download the data from the IBM server
## this may take ~30 seconds depending on your internet speed
#!wget --quiet https://cocl.us/BD0211EN_Data
#print("Data Downloaded!")
## unzip the folder's content into "resources" directory
## this may take ~30 seconds depending on your internet speed
#!unzip -q -o -d /resources/jupyterlab/labs/BD0211EN/ BD0211EN_Data
#print("Data Extracted!")
# list the extracted files
!ls -1 /resources/labs/BD0211EN/LabData/
###Output
_____no_output_____
###Markdown
Now, let's create an RDD by loading the log file that we analyze in the Scala version of this lab.
###Code
logFile = sc.textFile("/resources/labs/BD0211EN/LabData/notebook.log")
###Output
_____no_output_____
###Markdown
YOUR TURN: In the cell below, filter out the lines that contains INFO
###Code
# WRITE YOUR CODE BELOW
linesWithINFO = logFile.filter(lambda line: "INFO" in line)
###Output
_____no_output_____
###Markdown
Double-click **here** for the solution.<!-- The correct answer is:info = logFile.filter(lambda line: "INFO" in line)--> Count the lines:
###Code
# WRITE YOUR CODE BELOW
linesWithINFO.count()
###Output
###Markdown
Double-click **here** for the solution.<!-- The correct answer is:info.count()--> Count the lines with "spark" in it by combining transformation and action.
###Code
# WRITE YOUR CODE BELOW
logFile.filter(lambda line: "spark" in line).count()
###Output
_____no_output_____
###Markdown
Double-click **here** for the solution.<!-- The correct answer is:info.filter(lambda line: "spark" in line).count()--> Fetch those lines as an array of Strings
###Code
# WRITE YOUR CODE BELOW
logFile.filter(lambda line: "spark" in line).collect()
###Output
_____no_output_____
###Markdown
Double-click **here** for the solution.<!-- The correct answer is:info.filter(lambda line: "spark" in line).collect()--> View the graph of an RDD using this command:
###Code
print(linesWithINFO.toDebugString())
###Output
b'(2) PythonRDD[7] at RDD at PythonRDD.scala:53 []\n | /resources/labs/BD0211EN/LabData/notebook.log MapPartitionsRDD[3] at textFile at NativeMethodAccessorImpl.java:0 []\n | /resources/labs/BD0211EN/LabData/notebook.log HadoopRDD[2] at textFile at NativeMethodAccessorImpl.java:0 []'
###Markdown
Joining RDDs. Next, you are going to create RDDs for the same README and the POM files that we used in the Scala version.
###Code
readmeFile = sc.textFile("/resources/labs/BD0211EN/LabData/README.md")
pomFile = sc.textFile("/resources/labs/BD0211EN/LabData/pom.xml")
###Output
_____no_output_____
###Markdown
How many Spark keywords are in each file?
###Code
print(readmeFile.filter(lambda line: "Spark" in line).count())
print(pomFile.filter(lambda line: "Spark" in line).count())
###Output
18
2
###Markdown
Now do a WordCount on each RDD so that the results are (K,V) pairs of (word,count)
###Code
readmeCount = readmeFile. \
flatMap(lambda line: line.split(" ")). \
map(lambda word: (word, 1)). \
reduceByKey(lambda a, b: a + b)
pomCount = pomFile. \
flatMap(lambda line: line.split(" ")). \
map(lambda word: (word, 1)). \
reduceByKey(lambda a, b: a + b)
###Output
_____no_output_____
###Markdown
To see the array for either of them, just call the collect function on it.
###Code
print("Readme Count\n")
print(readmeCount.collect())
print("Pom Count\n")
print(pomCount.collect())
###Output
Pom Count
[('<?xml version="1.0" encoding="UTF-8"?>', 1), (' ~ Licensed to the Apache Software Foundation (ASF) under one or more', 1), (' ~ contributor license agreements. See the NOTICE file distributed with', 1), (' ~ The ASF licenses this file to You under the Apache License, Version 2.0', 1), (' http://www.apache.org/licenses/LICENSE-2.0', 1), (' ~ distributed under the License is distributed on an "AS IS" BASIS,', 1), (' ~ limitations under the License.', 1), (' -->', 1), ('', 841), ('<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">', 1), (' <modelVersion>4.0.0</modelVersion>', 1), (' <parent>', 1), (' <groupId>org.apache.spark</groupId>', 2), (' <artifactId>spark-parent_2.10</artifactId>', 1), (' <version>1.6.0-SNAPSHOT</version>', 1), (' <properties>', 1), (' <sbt.project.name>examples</sbt.project.name>', 1), (' </properties>', 1), (' <packaging>jar</packaging>', 1), (' <dependencies>', 1), (' <dependency>', 24), ('<version>${project.version}</version>', 11), (' </dependency>', 24), ('<artifactId>spark-streaming_${scala.binary.version}</artifactId>', 1), ('<artifactId>spark-bagel_${scala.binary.version}</artifactId>', 1), ('<artifactId>spark-hive_${scala.binary.version}</artifactId>', 1), ('<artifactId>spark-graphx_${scala.binary.version}</artifactId>', 1), ('<artifactId>spark-streaming-flume_${scala.binary.version}</artifactId>', 1), ('<exclusions>', 6), (' <artifactId>protobuf-java</artifactId>', 1), (' <!-- SPARK-4455 -->', 4), (' <groupId>org.apache.hbase</groupId>', 5), (' <artifactId>hbase-annotations</artifactId>', 4), (' <artifactId>jruby-complete</artifactId>', 1), ('<artifactId>hbase-protocol</artifactId>', 1), ('<artifactId>hbase-common</artifactId>', 1), (' <exclusion>', 1), (' <artifactId>netty</artifactId>', 1), (' <artifactId>hadoop-core</artifactId>', 1), (' <artifactId>hadoop-mapreduce-client-core</artifactId>', 1), (' <artifactId>hadoop-annotations</artifactId>', 1), (' <artifactId>commons-math</artifactId>', 1), (' <groupId>com.sun.jersey</groupId>', 4), (' <artifactId>jersey-core</artifactId>', 2), (' <groupId>org.slf4j</groupId>', 1), (' <artifactId>slf4j-api</artifactId>', 1), (' <artifactId>commons-io</artifactId>', 1), ('<scope>test</scope>', 2), ('<artifactId>commons-math3</artifactId>', 1), ('<groupId>com.twitter</groupId>', 1), ('<groupId>org.scalacheck</groupId>', 1), ('<artifactId>cassandra-all</artifactId>', 1), ('<version>1.2.6</version>', 1), (' <groupId>com.googlecode.concurrentlinkedhashmap</groupId>', 1), (' <artifactId>commons-cli</artifactId>', 1), (' <groupId>commons-codec</groupId>', 1), (' <groupId>commons-lang</groupId>', 1), (' <artifactId>commons-lang</artifactId>', 1), (' <groupId>commons-logging</groupId>', 1), (' <artifactId>commons-logging</artifactId>', 1), (' <artifactId>netty</artifactId>', 1), (' <groupId>jline</groupId>', 1), (' <groupId>org.apache.cassandra.deps</groupId>', 1), (' <artifactId>avro</artifactId>', 1), ('<groupId>com.github.scopt</groupId>', 1), ('<artifactId>scopt_${scala.binary.version}</artifactId>', 1), ('<version>3.2.0</version>', 1), ('them to be provided.', 1), (' </dependencies>', 1), (' <build>', 1), (' <outputDirectory>target/scala-${scala.binary.version}/classes</outputDirectory>', 1), (' <testOutputDirectory>target/scala-${scala.binary.version}/test-classes</testOutputDirectory>', 1), ('<plugin>', 3), (' <groupId>org.apache.maven.plugins</groupId>', 3), (' 
<artifactId>maven-deploy-plugin</artifactId>', 1), (' <skip>true</skip>', 2), (' </configuration>', 3), ('</plugin>', 3), (' <artifactId>maven-shade-plugin</artifactId>', 1), (' <shadedArtifactAttached>false</shadedArtifactAttached>', 1), (' <outputFile>${project.build.directory}/scala-${scala.binary.version}/spark-examples-${project.version}-hadoop${hadoop.version}.jar</outputFile>', 1), (' <artifactSet>', 1), ('<includes>', 1), (' </artifactSet>', 1), ('<filter>', 1), (' <artifact>*:*</artifact>', 1), (' <exclude>META-INF/*.DSA</exclude>', 1), (' <exclude>META-INF/*.RSA</exclude>', 1), (' </excludes>', 1), ('</filter>', 1), (' </filters>', 1), ('<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />', 1), ('</transformer>', 2), ('<transformer implementation="org.apache.maven.plugins.shade.resource.DontIncludeResourceTransformer">', 1), (' <resource>log4j.properties</resource>', 1), (' </build>', 1), ('<dependencies>', 1), (' <artifactId>spark-streaming-kinesis-asl_${scala.binary.version}</artifactId>', 1), ('</dependencies>', 1), (' </profile>', 6), (' <flume.deps.scope>provided</flume.deps.scope>', 1), (' <hadoop.deps.scope>provided</hadoop.deps.scope>', 1), ('<id>hbase-provided</id>', 1), (' <hbase.deps.scope>provided</hbase.deps.scope>', 1), ('<id>parquet-provided</id>', 1), (' <parquet.deps.scope>provided</parquet.deps.scope>', 1), (' </profiles>', 1), ('<!--', 1), (' ~ this work for additional information regarding copyright ownership.', 1), (' ~ (the "License"); you may not use this file except in compliance with', 1), (' ~ the License. You may obtain a copy of the License at', 1), (' ~', 3), (' ~ Unless required by applicable law or agreed to in writing, software', 1), (' ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.', 1), (' ~ See the License for the specific language governing permissions and', 1), (' <relativePath>../pom.xml</relativePath>', 1), (' </parent>', 1), (' <groupId>org.apache.spark</groupId>', 1), (' <artifactId>spark-examples_2.10</artifactId>', 1), (' <name>Spark Project Examples</name>', 1), (' <url>http://spark.apache.org/</url>', 1), ('<groupId>org.apache.spark</groupId>', 11), ('<artifactId>spark-core_${scala.binary.version}</artifactId>', 1), ('<scope>provided</scope>', 8), ('<artifactId>spark-mllib_${scala.binary.version}</artifactId>', 1), ('<artifactId>spark-streaming-twitter_${scala.binary.version}</artifactId>', 1), ('<artifactId>spark-streaming-mqtt_${scala.binary.version}</artifactId>', 1), ('<artifactId>spark-streaming-zeromq_${scala.binary.version}</artifactId>', 1), (' <exclusion>', 34), (' <groupId>org.spark-project.protobuf</groupId>', 1), (' </exclusion>', 34), ('</exclusions>', 5), ('<artifactId>spark-streaming-kafka_${scala.binary.version}</artifactId>', 1), ('<groupId>org.apache.hbase</groupId>', 7), ('<artifactId>hbase-testing-util</artifactId>', 1), ('<version>${hbase.version}</version>', 7), ('<scope>${hbase.deps.scope}</scope>', 6), (' <groupId>org.jruby</groupId>', 1), ('<artifactId>hbase-client</artifactId>', 1), (' <groupId>io.netty</groupId>', 1), (' </exclusion>', 1), (' </exclusions>', 1), ('<artifactId>hbase-server</artifactId>', 1), (' <groupId>org.apache.hadoop</groupId>', 7), (' <artifactId>hadoop-client</artifactId>', 1), (' <artifactId>hadoop-mapreduce-client-jobclient</artifactId>', 1), (' <artifactId>hadoop-auth</artifactId>', 1), (' <artifactId>hadoop-hdfs</artifactId>', 1), (' <artifactId>hbase-hadoop1-compat</artifactId>', 1), (' 
<groupId>org.apache.commons</groupId>', 2), (' <artifactId>jersey-server</artifactId>', 1), (' <artifactId>jersey-json</artifactId>', 1), (' <!-- hbase uses v2.4, which is better, but ...-->', 1), (' <groupId>commons-io</groupId>', 1), ('<artifactId>hbase-hadoop-compat</artifactId>', 2), ('<type>test-jar</type>', 1), ('<groupId>org.apache.commons</groupId>', 1), ('<artifactId>algebird-core_${scala.binary.version}</artifactId>', 1), ('<version>0.9.0</version>', 1), ('<artifactId>scalacheck_${scala.binary.version}</artifactId>', 1), ('<groupId>org.apache.cassandra</groupId>', 1), (' <groupId>com.google.guava</groupId>', 1), (' <artifactId>guava</artifactId>', 1), (' <artifactId>concurrentlinkedhashmap-lru</artifactId>', 1), (' <groupId>com.ning</groupId>', 1), (' <artifactId>compress-lzf</artifactId>', 1), (' <groupId>commons-cli</groupId>', 1), (' <artifactId>commons-codec</artifactId>', 1), (' <groupId>io.netty</groupId>', 1), (' <artifactId>jline</artifactId>', 1), (' <groupId>net.jpountz.lz4</groupId>', 1), (' <artifactId>lz4</artifactId>', 1), (' <artifactId>commons-math3</artifactId>', 1), (' <groupId>org.apache.thrift</groupId>', 1), (' <artifactId>libthrift</artifactId>', 1), (' <!--', 1), ('The following dependencies are already present in the Spark assembly, so we want to force', 1), (' -->', 1), ('<groupId>org.scala-lang</groupId>', 1), ('<artifactId>scala-library</artifactId>', 1), (' <plugins>', 1), (' <configuration>', 3), (' <artifactId>maven-install-plugin</artifactId>', 1), (' <include>*:*</include>', 1), ('</includes>', 1), (' <filters>', 1), (' <excludes>', 1), (' <exclude>META-INF/*.SF</exclude>', 1), (' <transformers>', 1), ('<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">', 1), (' <resource>reference.conf</resource>', 1), (' </transformers>', 1), (' </plugins>', 1), (' <profiles>', 1), (' <profile>', 6), ('<id>kinesis-asl</id>', 1), (' <dependency>', 1), (' <version>${project.version}</version>', 1), (' </dependency>', 1), (' <!-- Profiles that disable inclusion of certain dependencies. -->', 1), ('<id>flume-provided</id>', 1), ('<properties>', 5), ('</properties>', 5), ('<id>hadoop-provided</id>', 1), ('<id>hive-provided</id>', 1), (' <hive.deps.scope>provided</hive.deps.scope>', 1), ('</project>', 1)]
###Markdown
The join function combines two datasets of the form (K,V) and (K,W) into (K, (V,W)). Let's join these two counts together.
###Code
joined = readmeCount.join(pomCount)
###Output
_____no_output_____
###Markdown
Print the value to the console
###Code
joined.collect()
###Output
_____no_output_____
###Markdown
Let's combine the values together to get the total count
###Code
joinedSum = joined.map(lambda k: (k[0], (k[1][0]+k[1][1])))
###Output
_____no_output_____
###Markdown
To check that the result is correct, print the first five elements of the joined and joinedSum RDDs
###Code
print("Joined Individial\n")
print(joined.take(5))
print("\n\nJoined Sum\n")
print(joinedSum.take(5))
###Output
Joined Individual
[('', (43, 841))]
Joined Sum
[('', 884)]
###Markdown
Shared variablesNormally, when a function passed to a Spark operation (such as map or reduce) is executed on a remote cluster node, it works on separate copies of all the variables used in the function. These variables are copied to each machine, and no updates to the variables on the remote machine are propagated back to the driver program. Supporting general, read-write shared variables across tasks would be inefficient. However, Spark does provide two limited types of shared variables for two common usage patterns: broadcast variables and accumulators. Broadcast variablesBroadcast variables are useful for when you have a large dataset that you want to use across all the worker nodes. A read-only variable is cached on each machine rather than shipping a copy of it with tasks. Spark actions are executed through a set of stages, separated by distributed “shuffle” operations. Spark automatically broadcasts the common data needed by tasks within each stage.Read more here: [http://spark.apache.org/docs/latest/programming-guide.htmlbroadcast-variables](http://spark.apache.org/docs/latest/programming-guide.htmlbroadcast-variables?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-BD0211EN-SkillsNetwork-24237012&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-BD0211EN-SkillsNetwork-24237012&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ)Create a broadcast variable. Type in:
###Code
broadcastVar = sc.broadcast([1,2,3])
###Output
_____no_output_____
###Markdown
To get the value, type in:
###Code
broadcastVar.value
###Output
_____no_output_____
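###Markdown
As a small illustrative sketch of how a broadcast variable is typically used (the dictionary and variable names here are made up, reusing the same SparkContext `sc`): tasks read the broadcast value inside a transformation instead of capturing the data in every task closure.
###Code
lookup = sc.broadcast({1: 'one', 2: 'two', 3: 'three'})
numbers = sc.parallelize([1, 2, 3, 2])
print(numbers.map(lambda n: lookup.value[n]).collect())
###Output
_____no_output_____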
###Markdown
AccumulatorsAccumulators are variables that can only be added to through an associative operation. They are used to implement counters and sums efficiently in parallel. Spark natively supports numeric type accumulators and standard mutable collections, and programmers can extend these for new types. Only the driver can read the values of the accumulators; the workers can only increment them. Create the accumulator variable. Type in:
###Code
accum = sc.accumulator(0)
###Output
_____no_output_____
###Markdown
Next parallelize an array of four integers and run it through a loop to add each integer value to the accumulator variable. Type in:
###Code
rdd = sc.parallelize([1,2,3,4])
def f(x):
global accum
    accum += x
###Output
_____no_output_____
###Markdown
Next, iterate through each element of the rdd and apply the function f on it:
###Code
rdd.foreach(f)
###Output
_____no_output_____
###Markdown
To get the current value of the accumulator variable, type in:
###Code
accum.value
###Output
_____no_output_____
###Markdown
You should get a value of 10. This command can only be invoked on the driver side; the worker nodes can only increment the accumulator. Key-value pairsYou have already seen a bit about key-value pairs in the Joining RDD section. Create a key-value pair of two characters. Type in:
###Code
pair = ('a', 'b')
###Output
_____no_output_____
###Markdown
To access the value at the first index use [0], and use [1] for the second.
###Code
print(pair[0])
print(pair[1])
###Output
a
b
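###Markdown
Key-value pairs become more useful with pair-RDD operations. As a small illustrative sketch (again using the SparkContext `sc`), `reduceByKey` combines the values that share a key:
###Code
pairs = sc.parallelize([('a', 1), ('b', 1), ('a', 2)])
print(pairs.reduceByKey(lambda u, v: u + v).collect())
###Output
_____no_output_____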
|
Wi19_content/DSMCER/L11_PCA_Kmeans_filled.ipynb | ###Markdown
K-means clustering
###Code
import pandas as pd

harvard = pd.read_csv('https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/HCEPD_100K.csv')
###Output
_____no_output_____
###Markdown
Perform k-means clustering on the harvard data. Vary `k` from 5 to 50 in increments of 5 and run the k-means fits in a loop (a sketch of such a loop appears after the helper functions below). Be sure to use standard scaler normalization! `StandardScaler().fit_transform(dataArray)` Use [silhouette](http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html) analysis to pick a good k. PCA and plottingThe functions below may be helpful but they aren't tested in this notebook.
###Code
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def make_pca(array, components):
pca = PCA(n_components=components, svd_solver='full')
pca.fit(array)
return pca
def plot_pca(pca, array, outplt, x_axis, y_axis, colorList=None):
markers = pca.transform(array)
plt.scatter(markers[:, x_axis], markers[:, y_axis], color='c')
if colorList is not None:
x_markers = [markers[i, x_axis] for i in colorList]
y_markers = [markers[i, y_axis] for i in colorList]
plt.scatter(x_markers, y_markers, color='m')
plt.xlabel("Component 1 ({}%)".format(pca.explained_variance_ratio_[x_axis]*
100))
plt.ylabel("Component 2 ({}%)".format(pca.explained_variance_ratio_[y_axis]*
100))
plt.tight_layout()
plt.savefig(outplt)
def plot_clusters(pca, array, outplt, x_axis, y_axis, colorList=None):
xkcd = [x.rstrip("\n") for x in open("xkcd_colors.txt")]
markers = pca.transform(array)
if colorList is not None:
colors = [xkcd[i] for i in colorList]
else:
colors = ['c' for i in range(len(markers))]
plt.scatter(markers[:, x_axis], markers[:, y_axis], color=colors)
plt.xlabel("Component {0} ({1:.2f}%)".format((x_axis+1), pca.explained_varia
nce_ratio_[x_axis]*100))
plt.ylabel("Component {0} ({1:.2f}%)".format((y_axis+1), pca.explained_varia
nce_ratio_[y_axis]*100))
plt.tight_layout()
plt.savefig(outplt)
# Example usage (a sketch replacing an untested script snippet): scale the numeric
# columns of the harvard data loaded above and fit a two-component PCA.
data = StandardScaler().fit_transform(harvard.select_dtypes(include='number').values)
pca = make_pca(data, 2)
print(pca.explained_variance_ratio_)
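
# Sketch of the k-means exercise described above. Assumptions: scikit-learn's KMeans and
# silhouette_score, plus the scaled `data` array from the lines above; silhouette_score is
# evaluated on a random subsample to keep its O(n^2) cost manageable on 100K rows.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
scores = {}
for k in range(5, 55, 5):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(data)
    scores[k] = silhouette_score(data, labels, sample_size=5000, random_state=0)
best_k = max(scores, key=scores.get)
print("Silhouette score by k:", scores)
print("Best k by silhouette:", best_k)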
###Output
_____no_output_____ |
Analysis/analysis.ipynb | ###Markdown
Author : Roberto Berwa, MIT Topic : Covid-19 Analysis Lang : Julia Date : 04/08/2021 I. Setup
###Code
using Plots
using Dates
using DataFrames
using Interact
using CSV
url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv";
download(url,"covid_data.csv")
data = CSV.read("covid_data.csv");
###Output
_____no_output_____
###Markdown
II. Data manipulation
###Code
data = rename(data, 1 =>"Province", 2 => "Country") # Renaming columns
all_countries = unique(data[1:end, 2])
countries = ["US", "China", "Japan", "Korea, South", "United Kingdom", "France", "Germany"] #country study focus
num_days = length(data[1, 5:end])
data_country_dict = Dict()
for i in 1:length(countries)
d = aggregate(data[data.Country .== countries[i], 5: end], sum)[1,:]
data_country_dict[countries[i]] = convert(Vector, d)
end
###Output
_____no_output_____
###Markdown
III. Data Visualization Fig 1: Confirmed Cases around the world per day
###Code
@manipulate for day in slider(1:num_days, value=1)
p = plot(xlim=(0, num_days+5), ylim=(0,400000))
for country in keys(data_country_dict)
plot!(data_country_dict[country][1:day], label = country, leg=:topleft, m=:o)
end
xlabel!("days")
ylabel!("Confirmed Cases")
title!("Confirmed Cases around the world")
p
end
###Output
_____no_output_____
###Markdown
Fig 1: Confirmed Cases around the world Checking Exponential Growth
###Code
log_data_country_dict = Dict()
for (key, value) in data_country_dict
value = convert(Array{Float64}, value)
value[value .== 0.0] .= NaN
log_data_country_dict[key] = log.(value)
end
plog = plot(xlim=(0, num_days))
for country in keys(log_data_country_dict)
plot!(log_data_country_dict[country], label = country, leg=:bottomright)
end
xlabel!("days")
ylabel!("log Confirmed Cases")
title!("Confirmed Cases around the world")
plog
###Output
_____no_output_____
###Markdown
Fig 2: log confirmed cases per day From Fig 2, all seven countries experience exponential growth. Visualizing changes
###Code
weekly_data = Dict()
final = 0
for country in countries
country_data = data_country_dict[country]
country_data = convert(Array{Float64}, country_data)
country_data[country_data .== 0.0] .= NaN
country_data_weekly = []
for i in num_days:-1:1 # Monday to Monday
if i <= 7
append!(country_data_weekly, country_data[i])
else
append!(country_data_weekly, [country_data[i] - country_data[i-7]]) #Format: Sunday to Sunday
end
end
weekly_data[country] = reverse(country_data_weekly)
end
weekly_data
@manipulate for day in slider(1:num_days, value=1)
s = plot(xlim = (0, 20), ylim = (0, 20))
for country in countries
y = log.(weekly_data[country][1:day])
x = log.(data_country_dict[country][1:day])
plot!(x, y, label = country, leg=:topleft)
#scatter!(x[day], y[day])
annotate!(x[day], y[day], text(country, 10, :black))
end
xlabel!("Total cases")
ylabel!("Recent cases")
title!("Confirmed Cases around the world")
end
###Output
_____no_output_____ |
site/en/guide/function_mine.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Better performance with tf.function View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In TensorFlow 2, [eager execution](eager.ipynb) is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability.You can use `tf.function` to make graphs out of your programs. It is a transformation tool that creates Python-independent dataflow graphs out of your Python code. This will help you create performant and portable models, and it is required to use `SavedModel`.This guide will help you conceptualize how `tf.function` works under the hood, so you can use it effectively.The main takeaways and recommendations are:- Debug in eager mode, then decorate with `@tf.function`.- Don't rely on Python side effects like object mutation or list appends.- `tf.function` works best with TensorFlow ops; NumPy and Python calls are converted to constants. Setup
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Define a helper function to demonstrate the kinds of errors you might encounter:
###Code
# import traceback
# import contextlib
# # Some helper code to demonstrate the kinds of errors you might encounter.
# @contextlib.contextmanager
# def assert_raises(error_class):
# try:
# ''' This yield asks the context manager to do nothing, send the control back to the context manager
# content/definition and execute its body/definition. After execution of the body, it comes back
# to the statement just below yield(that's how yield works - refer - https://docs.google.com/spreadsheets/d/1xgeSi_yDPVySBPFpNLhMPzJBSPFZvd3NrZuNUy9OkFM/edit#gid=310581437&range=G21)
# '''
# yield
# except error_class as e:
# print('Caught expected exception \n {}:'.format(error_class))
# traceback.print_exc(limit=2)
# except Exception as e:
# raise e
# else:
# raise Exception('Expected {} to be raised but no error was raised!'.format(
# error_class))
import traceback
import contextlib
@contextlib.contextmanager
def assert_raises(error_class):
try:
''' This yield asks the context manager to do nothing, send the control back to the context manager
content/definition and execute its body/definition. After execution of the body, it comes back
to the statement just below yield(that's how yield works - refer - https://docs.google.com/spreadsheets/d/1xgeSi_yDPVySBPFpNLhMPzJBSPFZvd3NrZuNUy9OkFM/edit#gid=310581437&range=G21)
'''
yield
except error_class as e:
print('Caught exception as expected : {}'.format(error_class) )
traceback.print_exc(limit=2)
except Exception as e:
raise e
else:
raise Exception('Expected {} to be raised, but, no error raised!'.format(error_class) )
###Output
_____no_output_____
###Markdown
Basics UsageA `Function` you define (for example by applying the `@tf.function` decorator) is just like a core TensorFlow operation: You can execute it eagerly; you can compute gradients; and so on.
###Code
@tf.function # The decorator converts `add` into a `Function`.
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 1.0)
tape.gradient(result, v)
###Output
_____no_output_____
###Markdown
You can use `Function`s inside other `Function`s.
###Code
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
###Output
_____no_output_____
###Markdown
Imp-`Function`s can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), you may not see much speedup.
###Code
import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)
@tf.function
def conv_fn(img):
return conv_layer(img)
img = tf.zeros([1,200,200,100])
# warm-up; to offset the fixed costs of the first run
conv_fn(img); conv_layer(img)
print("Eager exec. results = {}".format( timeit.timeit( lambda:conv_layer(img), number=10000 ) ) )
print("tf.function exec. results = {}".format( timeit.timeit( lambda:conv_fn(img), number=10000 ) ) )
print("Check that there is not much of a diff.")
###Output
Eager exec. results = 20.446560036973096
tf.function exec. results = 21.122948333009845
Check that there is not much of a diff.
###Markdown
vvvvvImp & Good - TracingThis section exposes how `Function` works under the hood, including implementation details *which may change in the future*. However, once you understand why and when tracing happens, it's much easier to use `tf.function` effectively! What is "tracing"?A `Function` runs your program in a [TensorFlow Graph](https://www.tensorflow.org/guide/intro_to_graphswhat_are_graphs). However, a `tf.Graph` cannot represent all the things that you'd write in an eager TensorFlow program. For instance, Python supports polymorphism, but `tf.Graph` requires its inputs to have a specified data type and dimension. Or you may perform side tasks like reading command-line arguments, raising an error, or working with a more complex Python object; none of these things can run in a `tf.Graph`.`Function` bridges this gap by separating your code in two stages: 1) In the first stage, referred to as "**tracing**", `Function` creates a new `tf.Graph`. Python code runs normally, but all TensorFlow operations (like adding two Tensors) are *deferred*: they are captured by the `tf.Graph` and not run. 2) In the second stage, a `tf.Graph` which contains everything that was deferred in the first stage is run. This stage is much faster than the tracing stage.Depending on its inputs, `Function` will not always run the first stage when it is called. See ["Rules of tracing"](rules_of_tracing) below to get a better sense of how it makes that determination. Skipping the first stage and only executing the second stage is what gives you TensorFlow's high performance.When `Function` does decide to trace, the tracing stage is immediately followed by the second stage, so calling the `Function` both creates and runs the `tf.Graph`. Later you will see how you can run only the tracing stage with [`get_concrete_function`](obtaining_concrete_functions). When we pass arguments of different types into a `Function`, both stages are run: Notice, how, in the below code, once the tf.Graph is created for the signature/daatatype, the print statementdoes not get invoked
###Code
@tf.function
def dbl(a):
print("Trace of ", a)
return a+a
print( dbl(tf.constant(1)), "\n" )
print( dbl(tf.constant(5.2)), '\n' )
print( dbl(tf.constant(109.4)), '\n' )
print( dbl(tf.constant('a')), '\n' )
###Output
Trace of Tensor("a:0", shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
Trace of Tensor("a:0", shape=(), dtype=float32)
tf.Tensor(10.4, shape=(), dtype=float32)
tf.Tensor(218.8, shape=(), dtype=float32)
Trace of Tensor("a:0", shape=(), dtype=string)
tf.Tensor(b'aa', shape=(), dtype=string)
###Markdown
vvvImp & Good - Note that if you repeatedly call a `Function` with the same argument type, TensorFlow will skip the tracing stage and reuse a previously traced graph, as the generated graph would be identical.
###Code
# This doesn't print 'Tracing with ...'
print(dbl(tf.constant("b")))
###Output
tf.Tensor(b'bb', shape=(), dtype=string)
###Markdown
You can use `pretty_printed_concrete_signatures()` to see all of the available traces:
###Code
print( dbl.pretty_printed_concrete_signatures())
###Output
dbl(a)
Args:
a: int32 Tensor, shape=()
Returns:
int32 Tensor, shape=()
dbl(a)
Args:
a: float32 Tensor, shape=()
Returns:
float32 Tensor, shape=()
dbl(a)
Args:
a: string Tensor, shape=()
Returns:
string Tensor, shape=()
###Markdown
So far, you've seen that `tf.function` creates a cached, dynamic dispatch layer over TensorFlow's graph tracing logic. To be more specific about the terminology:- A `tf.Graph` is the raw, language-agnostic, portable representation of a TensorFlow computation.- A `ConcreteFunction` wraps a `tf.Graph`.- A `Function` manages a cache of `ConcreteFunction`s and picks the right one for your inputs.- `tf.function` wraps a Python function, returning a `Function` object.- **Tracing** creates a `tf.Graph` and wraps it in a `ConcreteFunction`, also known as a **trace.** Rules of tracingA `Function` determines whether to reuse a traced `ConcreteFunction` by computing a **cache key** from an input's args and kwargs. A **cache key** is a key that identifies a `ConcreteFunction` based on the input args and kwargs of the `Function` call, according to the following rules (which may change): - The key generated for a `tf.Tensor` is its shape and dtype.- The key generated for a `tf.Variable` is a unique variable id.- The key generated for a Python primitive (like `int`, `float`, `str`) is its value. - The key generated for nested `dict`s, `list`s, `tuple`s, `namedtuple`s, and [`attr`](https://www.attrs.org/en/stable/)s is the flattened tuple of leaf-keys (see `nest.flatten`). (As a result of this flattening, calling a concrete function with a different nesting structure than the one used during tracing will result in a TypeError).- For all other Python types the key is unique to the object. This way a function or method is traced independently for each instance it is called with. Note: Cache keys are based on the `Function` input parameters so changes to global and [free variables](https://docs.python.org/3/reference/executionmodel.htmlbinding-of-names) alone will not create a new trace. See [this section](depending_on_python_global_and_free_variables) for recommended practices when dealing with Python global and free variables. Controlling retracingRetracing, which is when your `Function` creates more than one trace, helps ensures that TensorFlow generates correct graphs for each set of inputs. However, tracing is an expensive operation! If your `Function` retraces a new graph for every call, you'll find that your code executes more slowly than if you didn't use `tf.function`.To control the tracing behavior, you can use the following techniques: - Specify `input_signature` in `tf.function` to limit tracing.
###Code
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),) )
def next_fn(x):
print("Trace of ", x)
return tf.where(x%2 == 0, x//2, 3*x+2)
print(next_fn(tf.constant([1,2])) )
# You specified a 1-D tensor in the input signature, so this should fail.
if 0:
with assert_raises(ValueError):
next_fn(tf.constant([ [1,2],[3,4] ]))
# You specified an int32 dtype in the input signature, so this should fail.
with assert_raises(ValueError):
next_fn(tf.constant([1.,2.]) )
###Output
Trace of Tensor("x:0", shape=(None,), dtype=int32)
tf.Tensor([5 1], shape=(2,), dtype=int32)
Caught exception as expected : <class 'ValueError'>
###Markdown
- Specify a \[None\] dimension in `tf.TensorSpec` to allow for flexibility in trace reuse. Since TensorFlow matches tensors based on their shape, using a `None` dimension as a wildcard will allow `Function`s to reuse traces for variably-sized input. Variably-sized input can occur if you have sequences of different length, or images of different sizes for each batch (See the [Transformer](../tutorials/text/transformer.ipynb) and [Deep Dream](../tutorials/generative/deepdream.ipynb) tutorials for example).
###Code
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def g(x):
print('Tracing with', x)
return x
# No retrace!
print(g(tf.constant([1, 2, 3])))
print(g(tf.constant([1, 2, 3, 4, 5])))
###Output
Tracing with Tensor("x:0", shape=(None,), dtype=int32)
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
tf.Tensor([1 2 3 4 5], shape=(5,), dtype=int32)
###Markdown
- Cast Python arguments to Tensors to reduce retracing. Often, Python arguments are used to control hyperparameters and graph constructions - for example, `num_layers=10` or `training=True` or `nonlinearity='relu'`. So, if the Python argument changes, it makes sense that you'd have to retrace the graph. However, it's possible that a Python argument is not being used to control graph construction. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph will dynamically unroll. Despite the multiple traces, the generated graph is actually identical, so retracing is unnecessary.
###Code
def train_one_step():
pass
@tf.function
def train(num_steps):
print("Tracing with num_steps = ", num_steps)
tf.print("Executing with num_steps = ", num_steps)
for _ in tf.range(num_steps):
train_one_step()
print("Retracing occurs for different Python arguments.")
train(num_steps=10)
train(num_steps=20)
print()
print("Traces are reused for Tensor arguments.")
train(num_steps=tf.constant(10))
train(num_steps=tf.constant(20))
###Output
Retracing occurs for different Python arguments.
Tracing with num_steps = 10
Executing with num_steps = 10
Tracing with num_steps = 20
Executing with num_steps = 20
Traces are reused for Tensor arguments.
Tracing with num_steps = Tensor("num_steps:0", shape=(), dtype=int32)
Executing with num_steps = 10
Executing with num_steps = 20
###Markdown
If you need to force retracing, create a new `Function`. Separate `Function` objects are guaranteed not to share traces.
###Code
def f():
print('Tracing!')
tf.print('Executing')
tf.function(f)()
tf.function(f)()
###Output
Tracing!
Executing
Tracing!
Executing
###Markdown
Obtaining concrete functionsEvery time a function is traced, a new concrete function is created. You can directly obtain a concrete function, by using `get_concrete_function`.
###Code
print("Obtaining concrete trace")
double_strings = dbl.get_concrete_function(tf.constant("a"))
print("Executing traced function")
print(double_strings(tf.constant("a")))
print(double_strings(a=tf.constant("b")))
# You can also call get_concrete_function on an InputSpec
double_strings_from_inputspec = dbl.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))
print(double_strings_from_inputspec(tf.constant("c")))
###Output
Trace of Tensor("a:0", shape=(), dtype=string)
tf.Tensor(b'cc', shape=(), dtype=string)
###Markdown
Printing a `ConcreteFunction` displays a summary of its input arguments (with types) and its output type.
###Code
print(double_strings)
###Output
ConcreteFunction dbl(a)
Args:
a: string Tensor, shape=()
Returns:
string Tensor, shape=()
###Markdown
You can also directly retrieve a concrete function's signature.
###Code
print(double_strings.structured_input_signature)
print(double_strings.structured_outputs)
###Output
((TensorSpec(shape=(), dtype=tf.string, name='a'),), {})
Tensor("Identity:0", shape=(), dtype=string)
###Markdown
Using a concrete trace with incompatible types will throw an error
###Code
with assert_raises(tf.errors.InvalidArgumentError):
double_strings(tf.constant(1))
###Output
Caught exception as expected : <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>
###Markdown
You may notice that Python arguments are given special treatment in a concrete function's input signature. Prior to TensorFlow 2.3, Python arguments were simply removed from the concrete function's signature. Starting with TensorFlow 2.3, Python arguments remain in the signature, but are constrained to take the value set during tracing.
###Code
@tf.function
def pow(a, b):
return a ** b
square = pow.get_concrete_function(a=tf.TensorSpec(None, tf.float32), b=2)
print(square)
assert square(tf.constant(10.0)) == 100
with assert_raises(TypeError):
square(tf.constant(10.0), b=3)
###Output
Caught exception as expected : <class 'TypeError'>
###Markdown
Obtaining graphsEach concrete function is a callable wrapper around a `tf.Graph`. Although retrieving the actual `tf.Graph` object is not something you'll normally need to do, you can obtain it easily from any concrete function.
###Code
graph = double_strings.graph
for node in graph.as_graph_def().node:
print(f'{node.input} -> {node.name}')
###Output
[] -> a
['a', 'a'] -> add
['add'] -> Identity
###Markdown
DebuggingIn general, debugging code is easier in eager mode than inside `tf.function`. You should ensure that your code executes error-free in eager mode before decorating with `tf.function`. To assist in the debugging process, you can call `tf.config.run_functions_eagerly(True)` to globally disable and reenable `tf.function`.When tracking down issues that only appear within `tf.function`, here are some tips:- Plain old Python `print` calls only execute during tracing, helping you track down when your function gets (re)traced.- `tf.print` calls will execute every time, and can help you track down intermediate values during execution.- `tf.debugging.enable_check_numerics` is an easy way to track down where NaNs and Inf are created.- `pdb` (the [Python debugger](https://docs.python.org/3/library/pdb.html)) can help you understand what's going on during tracing. (Caveat: `pdb` will drop you into AutoGraph-transformed source code.) AutoGraph transformationsAutoGraph is a library that is on by default in `tf.function`, and transforms a subset of Python eager code into graph-compatible TensorFlow ops. This includes control flow like `if`, `for`, `while`.TensorFlow ops like `tf.cond` and `tf.while_loop` continue to work, but control flow is often easier to write and understand when written in Python.
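First, a quick, minimal sketch of the `tf.config.run_functions_eagerly` toggle described above; the function here is illustrative, and the point is that its Python `print` runs on every call while the toggle is on:
###Code
@tf.function
def double_and_log(x):
  print("Python-side print:", x)   # normally runs only while tracing
  return x * 2

tf.config.run_functions_eagerly(True)    # run the Python body eagerly on every call
print(double_and_log(tf.constant(2)))
print(double_and_log(tf.constant(3)))
tf.config.run_functions_eagerly(False)   # restore graph execution
###Output
_____no_output_____
###Markdown
With eager execution switched back off, the following cells pick up the AutoGraph control-flow examples.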
###Code
# A simple loop
@tf.function
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
f(tf.random.uniform([5]))
###Output
[0.968150139 0.94661057 0.659141541 0.243338108 0.324974418]
[0.747890294 0.738244772 0.57779181 0.238646239 0.313997835]
[0.633888662 0.628083587 0.521058679 0.234216645 0.304069668]
[0.560724 0.556731224 0.478516668 0.230025738 0.295032471]
[0.508514404 0.505548179 0.445054889 0.226052761 0.286760062]
[0.468786865 0.466469258 0.417825639 0.222279444 0.27915]
[0.437218577 0.435342133 0.395097047 0.218689546 0.272118181]
[0.411336243 0.409776062 0.375746042 0.215268672 0.265594691]
[0.389606655 0.388282478 0.359007448 0.212003931 0.259520918]
[0.371021092 0.36987862 0.34433946 0.208883777 0.253847361]
[0.354884475 0.353885531 0.331345946 0.205897823 0.248531789]
[0.340700179 0.339816928 0.319729656 0.203036711 0.243538037]
[0.32810235 0.32731393 0.309262425 0.200291961 0.238834769]
[0.316814631 0.316105157 0.299765944 0.197655901 0.234394819]
[0.306623846 0.305980951 0.291098356 0.195121482 0.230194479]
[0.297362536 0.296776444 0.283145398 0.192682371 0.226212859]
[0.288897097 0.288359851 0.275813699 0.190332711 0.2224316]
[0.281119376 0.280624509 0.26902616 0.188067153 0.21883443]
[0.273940742 0.273482978 0.262718409 0.18588081 0.21540685]
[0.26728785 0.26686275 0.256836385 0.183769137 0.212135911]
[0.261099339 0.260703176 0.251334101 0.181727976 0.20901]
[0.255323499 0.254953116 0.246172294 0.179753512 0.206018686]
[0.249916255 0.249568969 0.241317198 0.177842185 0.203152582]
[0.244839922 0.244513422 0.236739501 0.175990671 0.200403169]
[0.240061983 0.239754274 0.232413709 0.174195901 0.197762758]
[0.235554293 0.235263616 0.228317484 0.172455072 0.195224255]
[0.231292218 0.231017053 0.224431172 0.170765519 0.192781329]
[0.227254167 0.226993203 0.220737413 0.169124737 0.190428063]
[0.223421186 0.223173201 0.217220768 0.167530462 0.188159123]
[0.219776407 0.219540402 0.2138675 0.165980518 0.185969606]
###Markdown
If you're curious you can inspect the code autograph generates.
###Code
print(tf.autograph.to_code(f.python_function))
###Output
def tf__f(x):
with ag__.FunctionScope('f', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
do_return = False
retval_ = ag__.UndefinedReturnValue()
def get_state():
return (x,)
def set_state(vars_):
nonlocal x
(x,) = vars_
def loop_body():
nonlocal x
ag__.converted_call(ag__.ld(tf).print, (ag__.ld(x),), None, fscope)
x = ag__.converted_call(ag__.ld(tf).tanh, (ag__.ld(x),), None, fscope)
def loop_test():
return (ag__.converted_call(ag__.ld(tf).reduce_sum, (ag__.ld(x),), None, fscope) > 1)
ag__.while_stmt(loop_test, loop_body, get_state, set_state, ('x',), {})
try:
do_return = True
retval_ = ag__.ld(x)
except:
do_return = False
raise
return fscope.ret(retval_, do_return)
###Markdown
ConditionalsAutoGraph will convert some `if ` statements into the equivalent `tf.cond` calls. This substitution is made if `` is a Tensor. Otherwise, the `if` statement is executed as a Python conditional.A Python conditional executes during tracing, so exactly one branch of the conditional will be added to the graph. Without AutoGraph, this traced graph would be unable to take the alternate branch if there is data-dependent control flow.`tf.cond` traces and adds both branches of the conditional to the graph, dynamically selecting a branch at execution time. Tracing can have unintended side effects; check out [AutoGraph tracing effects](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.mdeffects-of-the-tracing-process) for more information.
###Code
@tf.function
def fizzbuzz(n):
for i in tf.range(1, n + 1):
print('Tracing for loop')
if i % 15 == 0:
print('Tracing fizzbuzz branch')
tf.print('fizzbuzz')
elif i % 3 == 0:
print('Tracing fizz branch')
tf.print('fizz')
elif i % 5 == 0:
print('Tracing buzz branch')
tf.print('buzz')
else:
print('Tracing default branch')
tf.print(i)
fizzbuzz(tf.constant(5))
fizzbuzz(tf.constant(20))
###Output
Tracing for loop
Tracing fizzbuzz branch
Tracing fizz branch
Tracing buzz branch
Tracing default branch
1
2
fizz
4
buzz
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz
16
17
fizz
19
buzz
###Markdown
See the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.mdif-statements) for additional restrictions on AutoGraph-converted if statements. LoopsAutoGraph will convert some `for` and `while` statements into the equivalent TensorFlow looping ops, like `tf.while_loop`. If not converted, the `for` or `while` loop is executed as a Python loop.This substitution is made in the following situations:- `for x in y`: if `y` is a Tensor, convert to `tf.while_loop`. In the special case where `y` is a `tf.data.Dataset`, a combination of `tf.data.Dataset` ops are generated.- `while `: if `` is a Tensor, convert to `tf.while_loop`.A Python loop executes during tracing, adding additional ops to the `tf.Graph` for every iteration of the loop.A TensorFlow loop traces the body of the loop, and dynamically selects how many iterations to run at execution time. The loop body only appears once in the generated `tf.Graph`.See the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.mdwhile-statements) for additional restrictions on AutoGraph-converted `for` and `while` statements. Looping over Python dataA common pitfall is to loop over Python/NumPy data within a `tf.function`. This loop will execute during the tracing process, adding a copy of your model to the `tf.Graph` for each iteration of the loop.If you want to wrap the entire training loop in `tf.function`, the safest way to do this is to wrap your data as a `tf.data.Dataset` so that AutoGraph will dynamically unroll the training loop.
###Code
def measure_graph_size(f, *args):
g = f.get_concrete_function(*args).graph
print("{}({}) contains {} nodes in its graph".format(
f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))
@tf.function
def train(dataset):
loss = tf.constant(0)
for x, y in dataset:
loss += tf.abs(y - x) # Some dummy computation.
return loss
small_data = [(1, 1)] * 3
big_data = [(1, 1)] * 10
measure_graph_size(train, small_data)
measure_graph_size(train, big_data)
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: big_data, (tf.int32, tf.int32)))
###Output
train([(1, 1), (1, 1), (1, 1)]) contains 11 nodes in its graph
train([(1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1)]) contains 32 nodes in its graph
train(<FlatMapDataset shapes: (<unknown>, <unknown>), types: (tf.int32, tf.int32)>) contains 6 nodes in its graph
train(<FlatMapDataset shapes: (<unknown>, <unknown>), types: (tf.int32, tf.int32)>) contains 6 nodes in its graph
###Markdown
When wrapping Python/NumPy data in a Dataset, be mindful of `tf.data.Dataset.from_generator` versus ` tf.data.Dataset.from_tensors`. The former will keep the data in Python and fetch it via `tf.py_function` which can have performance implications, whereas the latter will bundle a copy of the data as one large `tf.constant()` node in the graph, which can have memory implications.Reading data from files via `TFRecordDataset`, `CsvDataset`, etc. is the most effective way to consume data, as then TensorFlow itself can manage the asynchronous loading and prefetching of data, without having to involve Python. To learn more, see the [`tf.data`: Build TensorFlow input pipelines](../../guide/data) guide. Accumulating values in a loopA common pattern is to accumulate intermediate values from a loop. Normally, this is accomplished by appending to a Python list or adding entries to a Python dictionary. However, as these are Python side effects, they will not work as expected in a dynamically unrolled loop. Use `tf.TensorArray` to accumulate results from a dynamically unrolled loop.
###Code
batch_size = 2
seq_len = 3
feature_size = 4
def rnn_step(inp, state):
return inp + state
@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
max_seq_len = input_data.shape[0]
states = tf.TensorArray(tf.float32, size=max_seq_len)
state = initial_state
for i in tf.range(max_seq_len):
state = rnn_step(input_data[i], state)
states = states.write(i, state)
return tf.transpose(states.stack(), [1, 0, 2])
dynamic_rnn(rnn_step,
tf.random.uniform([batch_size, seq_len, feature_size]),
tf.zeros([batch_size, feature_size]))
###Output
_____no_output_____
###Markdown
LimitationsTensorFlow `Function` has a few limitations by design that you should be aware of when converting a Python function to a `Function`. Executing Python side effectsSide effects, like printing, appending to lists, and mutating globals, can behave unexpectedly inside a `Function`, sometimes executing twice or not all. They only happen the first time you call a `Function` with a set of inputs. Afterwards, the traced `tf.Graph` is reexecuted, without executing the Python code.The general rule of thumb is to avoid relying on Python side effects in your logic and only use them to debug your traces. Otherwise, TensorFlow APIs like `tf.data`, `tf.print`, `tf.summary`, `tf.Variable.assign`, and `tf.TensorArray` are the best way to ensure your code will be executed by the TensorFlow runtime with each call.
###Code
@tf.function
def f(x):
print("Traced with", x)
tf.print("Executed with", x)
f(1)
f(1)
f(2)
###Output
_____no_output_____
###Markdown
If you would like to execute Python code during each invocation of a `Function`, `tf.py_function` is an exit hatch. The drawback of `tf.py_function` is that it's not portable or particularly performant, cannot be saved with SavedModel, and does not work well in distributed (multi-GPU, TPU) setups. Also, since `tf.py_function` has to be wired into the graph, it casts all inputs/outputs to tensors. Changing Python global and free variablesChanging Python global and [free variables](https://docs.python.org/3/reference/executionmodel.htmlbinding-of-names) counts as a Python side effect, so it only happens during tracing.
###Code
external_list = []
@tf.function
def side_effect(x):
print('Python side effect')
external_list.append(x)
side_effect(1)
side_effect(1)
side_effect(1)
# The list append only happened once!
assert len(external_list) == 1
###Output
_____no_output_____
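###Markdown
As a minimal sketch of the `tf.py_function` exit hatch mentioned above (the function and names are illustrative): the wrapped Python body runs eagerly on every call, so its side effects happen each time, at the cost of portability and performance.
###Code
@tf.function
def with_py_side_effect(x):
  def log_numpy(v):
    print("Python print runs on every call:", v.numpy())
    return v + 1
  return tf.py_function(log_numpy, inp=[x], Tout=x.dtype)

with_py_side_effect(tf.constant(1))
with_py_side_effect(tf.constant(2))
###Output
_____no_output_____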
###Markdown
You should avoid mutating containers like lists, dicts, other objects that live outside the `Function`. Instead, use arguments and TF objects. For example, the section ["Accumulating values in a loop"](accumulating_values_in_a_loop) has one example of how list-like operations can be implemented.You can, in some cases, capture and manipulate state if it is a [`tf.Variable`](https://www.tensorflow.org/guide/variable). This is how the weights of Keras models are updated with repeated calls to the same `ConcreteFunction`. Using Python iterators and generators Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, they are examples of Python side effects and therefore only happen during tracing.
###Code
@tf.function
def buggy_consume_next(iterator):
tf.print("Value:", next(iterator))
iterator = iter([1, 2, 3])
buggy_consume_next(iterator)
# This reuses the first value from the iterator, rather than consuming the next value.
buggy_consume_next(iterator)
buggy_consume_next(iterator)
###Output
_____no_output_____
###Markdown
Just like how TensorFlow has a specialized `tf.TensorArray` for list constructs, it has a specialized `tf.data.Iterator` for iteration constructs. See the section on [AutoGraph transformations](autograph_transformations) for an overview. Also, the [`tf.data`](https://www.tensorflow.org/guide/data) API can help implement generator patterns:
###Code
@tf.function
def good_consume_next(iterator):
# This is ok, iterator is a tf.data.Iterator
tf.print("Value:", next(iterator))
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3])
iterator = iter(ds)
good_consume_next(iterator)
good_consume_next(iterator)
good_consume_next(iterator)
###Output
_____no_output_____
###Markdown
Deleting tf.Variables between `Function` callsAnother error you may encounter is a garbage-collected variable. `ConcreteFunction`s only retain [WeakRefs](https://docs.python.org/3/library/weakref.html) to the variables they close over, so you must retain a reference to any variables.
###Code
external_var = tf.Variable(3)
@tf.function
def f(x):
return x * external_var
traced_f = f.get_concrete_function(4)
print("Calling concrete function...")
print(traced_f(4))
# The original variable object gets garbage collected, since there are no more
# references to it.
external_var = tf.Variable(4)
print()
print("Calling concrete function after garbage collecting its closed Variable...")
with assert_raises(tf.errors.FailedPreconditionError):
traced_f(4)
###Output
_____no_output_____
###Markdown
All outputs of a tf.function must be return valuesWith the exception of `tf.Variable`s, a tf.function must return all itsoutputs. Attempting to directly access any tensors from a function withoutgoing through return values causes "leaks".For example, the function below "leaks" the tensor `a` through the Pythonglobal `x`:
###Code
x = None
@tf.function
def leaky_function(a):
global x
x = a + 1 # Bad - leaks local tensor
return a + 2
correct_a = leaky_function(tf.constant(1))
print(correct_a.numpy()) # Good - value obtained from function's returns
with assert_raises(AttributeError):
x.numpy() # Bad - tensor leaked from inside the function, cannot be used here
print(x)
###Output
3
Caught exception as expected : <class 'AttributeError'>
Tensor("add:0", shape=(), dtype=int32)
###Markdown
This is true even if the leaked value is also returned:
###Code
@tf.function
def leaky_function(a):
global x
x = a + 1 # Bad - leaks local tensor
return x # Good - uses local tensor
correct_a = leaky_function(tf.constant(1))
print(correct_a.numpy()) # Good - value obtained from function's returns
with assert_raises(AttributeError):
x.numpy() # Bad - tensor leaked from inside the function, cannot be used here
print(x)
@tf.function
def captures_leaked_tensor(b):
b += x # Bad - `x` is leaked from `leaky_function`
return b
with assert_raises(TypeError):
captures_leaked_tensor(tf.constant(2))
###Output
2
Caught exception as expected : <class 'AttributeError'>
Tensor("add:0", shape=(), dtype=int32)
Caught exception as expected : <class 'TypeError'>
###Markdown
Usually, leaks such as these occur when you use Python statements or data structures.In addition to leaking inaccessible tensors, such statements are also likely wrong because they count as Python side effects, and are not guaranteed to execute at every function call.Common ways to leak local tensors also include mutating an external Python collection, or an object:
###Code
class MyClass:
def __init__(self):
self.field = None
external_list = []
external_object = MyClass()
def leaky_function():
a = tf.constant(1)
external_list.append(a) # Bad - leaks tensor
external_object.field = a # Bad - leaks tensor
###Output
_____no_output_____
###Markdown
Known IssuesIf your `Function` is not evaluating correctly, the error may be explained by these known issues which are planned to be fixed in the future. Depending on Python global and free variables`Function` creates a new `ConcreteFunction` when called with a new value of a Python argument. However, it does not do that for the Python closure, globals, or nonlocals of that `Function`. If their value changes in between calls to the `Function`, the `Function` will still use the values they had when it was traced. This is different from how regular Python functions work.For that reason, we recommend a functional programming style that uses arguments instead of closing over outer names.
###Code
@tf.function
def buggy_add():
return 1 + foo
@tf.function
def recommended_add(foo):
return 1 + foo
foo = 1
print("Buggy:", buggy_add())
print("Correct:", recommended_add(foo))
print("Updating the value of `foo` to 100!")
foo = 100
print("Buggy:", buggy_add()) # Did not change!
print("Correct:", recommended_add(foo))
###Output
_____no_output_____
###Markdown
You can close over outer names, as long as you don't update their values. Depending on Python objects The recommendation to pass Python objects as arguments into `tf.function` has a number of known issues, that are expected to be fixed in the future. In general, you can rely on consistent tracing if you use a Python primitive or `tf.nest`-compatible structure as an argument or pass in a *different* instance of an object into a `Function`. However, `Function` will *not* create a new trace when you pass **the same object and only change its attributes**.
###Code
class SimpleModel(tf.Module):
def __init__(self):
# These values are *not* tf.Variables.
self.bias = 0.
self.weight = 2.
@tf.function
def evaluate(model, x):
return model.weight * x + model.bias
simple_model = SimpleModel()
x = tf.constant(10.)
print(evaluate(simple_model, x))
print("Adding bias!")
simple_model.bias += 5.0
print(evaluate(simple_model, x)) # Didn't change :(
###Output
_____no_output_____
###Markdown
Using the same `Function` to evaluate the updated instance of the model will be buggy since the updated model has the [same cache key](rules_of_tracing) as the original model.For that reason, we recommend that you write your `Function` to avoid depending on mutable object attributes or create new objects.If that is not possible, one workaround is to make new `Function`s each time you modify your object to force retracing:
###Code
def evaluate(model, x):
return model.weight * x + model.bias
new_model = SimpleModel()
evaluate_no_bias = tf.function(evaluate).get_concrete_function(new_model, x)
# Don't pass in `new_model`, `Function` already captured its state during tracing.
print(evaluate_no_bias(x))
print("Adding bias!")
new_model.bias += 5.0
# Create new Function and ConcreteFunction since you modified new_model.
evaluate_with_bias = tf.function(evaluate).get_concrete_function(new_model, x)
print(evaluate_with_bias(x)) # Don't pass in `new_model`.
###Output
_____no_output_____
###Markdown
As [retracing can be expensive](https://www.tensorflow.org/guide/intro_to_graphstracing_and_performance), you can use `tf.Variable`s as object attributes, which can be mutated (but not changed, careful!) for a similar effect without needing a retrace.
###Code
class BetterModel:
def __init__(self):
self.bias = tf.Variable(0.)
self.weight = tf.Variable(2.)
@tf.function
def evaluate(model, x):
return model.weight * x + model.bias
better_model = BetterModel()
print(evaluate(better_model, x))
print("Adding bias!")
better_model.bias.assign_add(5.0) # Note: instead of better_model.bias += 5
print(evaluate(better_model, x)) # This works!
###Output
_____no_output_____
###Markdown
Creating tf.Variables`Function` only supports singleton `tf.Variable`s created once on the first call, and reused across subsequent function calls. The code snippet below would create a new `tf.Variable` in every function call, which results in a `ValueError` exception.Example:
###Code
@tf.function
def f(x):
v = tf.Variable(1.0)
return v
with assert_raises(ValueError):
f(1.0)
###Output
_____no_output_____
###Markdown
A common pattern used to work around this limitation is to start with a Python None value, then conditionally create the `tf.Variable` if the value is None:
###Code
class Count(tf.Module):
def __init__(self):
self.count = None
@tf.function
def __call__(self):
if self.count is None:
self.count = tf.Variable(0)
return self.count.assign_add(1)
c = Count()
print(c())
print(c())
###Output
_____no_output_____
###Markdown
Using with multiple Keras optimizersYou may encounter `ValueError: tf.function only supports singleton tf.Variables created on the first call.` when using more than one Keras optimizer with a `tf.function`. This error occurs because optimizers internally create `tf.Variables` when they apply gradients for the first time.
###Code
opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)
opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)
@tf.function
def train_step(w, x, y, optimizer):
with tf.GradientTape() as tape:
L = tf.reduce_sum(tf.square(w*x - y))
gradients = tape.gradient(L, [w])
optimizer.apply_gradients(zip(gradients, [w]))
w = tf.Variable(2.)
x = tf.constant([-1.])
y = tf.constant([2.])
train_step(w, x, y, opt1)
print("Calling `train_step` with different optimizer...")
with assert_raises(ValueError):
train_step(w, x, y, opt2)
###Output
_____no_output_____
###Markdown
If you need to change the optimizer during training, a workaround is to create a new `Function` for each optimizer, calling the [`ConcreteFunction`](obtaining_concrete_functions) directly.
###Code
opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)
opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)
# Not a tf.function.
def train_step(w, x, y, optimizer):
with tf.GradientTape() as tape:
L = tf.reduce_sum(tf.square(w*x - y))
gradients = tape.gradient(L, [w])
optimizer.apply_gradients(zip(gradients, [w]))
w = tf.Variable(2.)
x = tf.constant([-1.])
y = tf.constant([2.])
# Make a new Function and ConcreteFunction for each optimizer.
train_step_1 = tf.function(train_step).get_concrete_function(w, x, y, opt1)
train_step_2 = tf.function(train_step).get_concrete_function(w, x, y, opt2)
for i in range(10):
if i % 2 == 0:
train_step_1(w, x, y) # `opt1` is not used as a parameter.
else:
train_step_2(w, x, y) # `opt2` is not used as a parameter.
###Output
_____no_output_____ |
inst/apps/vignette.ipynb | ###Markdown
Managing Background Shiny Apps This notebook provides an overview of `shinybg`'s app management functionality.
###Code
library(shiny)
library(shinybg)
###Output
_____no_output_____
###Markdown
RegistrationWhenever a Shiny app is launched using `renderShinyApp()` or `runBackgroundApp()` it is registered with `shinybg`'s app manager.
###Code
app1 <- renderShinyApp(
appFile = system.file("apps/histogram-app.R", package = "shinybg"),
port = 3000
)
###Output
_____no_output_____
###Markdown
You can Verify the app is registered and running:
###Code
list_apps()
###Output
_____no_output_____
###Markdown
Let's start another instance of the same app on a different port.
###Code
app2 <- renderShinyApp(
appFile = system.file("apps/histogram-app.R", package = "shinybg"),
port = 3001
)
###Output
_____no_output_____
###Markdown
Now verify boths apps appear in the manager:
###Code
list_apps()
###Output
_____no_output_____
###Markdown
ManagementYou can use the app manager to kill any of your background Shiny apps:
###Code
kill_app(3000)
###Output
_____no_output_____
###Markdown
If you scroll up to the app's cell you'll see it is now greyed out, indicating it's been terminated. If you attempt to launch an app on a port already in use the app manager will kill the existing app before starting the new one:
###Code
system.file("apps/sever-info-app.R", package = "shinybg")
app3 <- renderShinyApp(
appFile = system.file("apps/sever-info-app.R", package = "shinybg"),
port = 3001
)
###Output
_____no_output_____
###Markdown
CleanupFinally, we can use the app manager to kill all running apps.
###Code
kill_all_apps()
###Output
_____no_output_____ |
fastAI/deeplearning1/nbs/mnist.ipynb | ###Markdown
Linear model
###Code
def get_lin_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
lm = get_lin_model()
gen = image.ImageDataGenerator()
batches = gen.flow(train_data, train_label, batch_size=BATCH_SIZE)
val_batches = gen.flow(valid_data, valid_label, batch_size=BATCH_SIZE)
lm.fit_generator(
batches,
samples_per_epoch=len(train_data) / 20,
nb_epoch=1,
validation_data=val_batches,
nb_val_samples=len(valid_data) / 20
)
lm.optimizer.lr=0.1
lm.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
lm.optimizer.lr=0.01
lm.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
###Output
Epoch 1/4
60000/60000 [==============================] - 5s - loss: 0.2710 - acc: 0.9241 - val_loss: 0.2858 - val_acc: 0.9216
Epoch 2/4
60000/60000 [==============================] - 5s - loss: 0.2667 - acc: 0.9249 - val_loss: 0.2764 - val_acc: 0.9242
Epoch 3/4
60000/60000 [==============================] - 4s - loss: 0.2707 - acc: 0.9249 - val_loss: 0.2759 - val_acc: 0.9219
Epoch 4/4
60000/60000 [==============================] - 4s - loss: 0.2603 - acc: 0.9267 - val_loss: 0.2810 - val_acc: 0.9240
###Markdown
Single dense layer
###Code
def get_fc_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc = get_fc_model()
len(X_train)
BATCH_SIZE = 20
gen = image.ImageDataGenerator()
batches = gen.flow(X_train, y_train, batch_size=BATCH_SIZE)
test_batches = gen.flow(X_test, y_test, batch_size=BATCH_SIZE)
fc.fit_generator(
batches,
samples_per_epoch=len(X_train) / BATCH_SIZE,
nb_epoch=1,
validation_data=test_batches,
nb_val_samples=len(X_test) / BATCH_SIZE
)
fc.optimizer.lr=0.1
fc.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
fc.optimizer.lr=0.01
fc.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
###Output
Epoch 1/4
60000/60000 [==============================] - 5s - loss: 0.2549 - acc: 0.9431 - val_loss: 0.2797 - val_acc: 0.9341
Epoch 2/4
60000/60000 [==============================] - 5s - loss: 0.2408 - acc: 0.9457 - val_loss: 0.2753 - val_acc: 0.9341
Epoch 3/4
60000/60000 [==============================] - 5s - loss: 0.2358 - acc: 0.9453 - val_loss: 0.2733 - val_acc: 0.9339
Epoch 4/4
60000/60000 [==============================] - 5s - loss: 0.2252 - acc: 0.9474 - val_loss: 0.2670 - val_acc: 0.9397
###Markdown
Basic 'VGG-style' CNN
###Code
def get_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
Convolution2D(64,3,3, activation='relu'),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=8,
validation_data=test_batches, nb_val_samples=test_batches.N)
###Output
Epoch 1/8
60000/60000 [==============================] - 6s - loss: 0.0232 - acc: 0.9929 - val_loss: 0.0207 - val_acc: 0.9935
Epoch 2/8
60000/60000 [==============================] - 6s - loss: 0.0193 - acc: 0.9935 - val_loss: 0.0252 - val_acc: 0.9919
Epoch 3/8
60000/60000 [==============================] - 6s - loss: 0.0155 - acc: 0.9949 - val_loss: 0.0298 - val_acc: 0.9919
Epoch 4/8
60000/60000 [==============================] - 6s - loss: 0.0133 - acc: 0.9958 - val_loss: 0.0313 - val_acc: 0.9913
Epoch 5/8
60000/60000 [==============================] - 6s - loss: 0.0095 - acc: 0.9970 - val_loss: 0.0327 - val_acc: 0.9913
Epoch 6/8
60000/60000 [==============================] - 6s - loss: 0.0107 - acc: 0.9966 - val_loss: 0.0301 - val_acc: 0.9906
Epoch 7/8
60000/60000 [==============================] - 7s - loss: 0.0070 - acc: 0.9979 - val_loss: 0.0269 - val_acc: 0.9938
Epoch 8/8
60000/60000 [==============================] - 6s - loss: 0.0082 - acc: 0.9975 - val_loss: 0.0261 - val_acc: 0.9926
###Markdown
Data augmentation
###Code
model = get_model()
gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=64)
test_batches = gen.flow(X_test, y_test, batch_size=64)
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=8,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=14,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.0001
model.fit_generator(batches, batches.N, nb_epoch=10,
validation_data=test_batches, nb_val_samples=test_batches.N)
###Output
Epoch 1/10
60000/60000 [==============================] - 7s - loss: 0.0191 - acc: 0.9942 - val_loss: 0.0277 - val_acc: 0.9906
Epoch 2/10
60000/60000 [==============================] - 7s - loss: 0.0196 - acc: 0.9938 - val_loss: 0.0192 - val_acc: 0.9945
Epoch 3/10
60000/60000 [==============================] - 6s - loss: 0.0173 - acc: 0.9946 - val_loss: 0.0258 - val_acc: 0.9924
Epoch 4/10
60000/60000 [==============================] - 7s - loss: 0.0189 - acc: 0.9943 - val_loss: 0.0249 - val_acc: 0.9924
Epoch 5/10
60000/60000 [==============================] - 7s - loss: 0.0166 - acc: 0.9951 - val_loss: 0.0271 - val_acc: 0.9920
Epoch 6/10
60000/60000 [==============================] - 7s - loss: 0.0183 - acc: 0.9942 - val_loss: 0.0229 - val_acc: 0.9937
Epoch 7/10
60000/60000 [==============================] - 7s - loss: 0.0177 - acc: 0.9944 - val_loss: 0.0275 - val_acc: 0.9924
Epoch 8/10
60000/60000 [==============================] - 6s - loss: 0.0168 - acc: 0.9946 - val_loss: 0.0246 - val_acc: 0.9926
Epoch 9/10
60000/60000 [==============================] - 7s - loss: 0.0169 - acc: 0.9943 - val_loss: 0.0215 - val_acc: 0.9936
Epoch 10/10
60000/60000 [==============================] - 7s - loss: 0.0160 - acc: 0.9953 - val_loss: 0.0267 - val_acc: 0.9919
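###Markdown
To make the effect of these augmentation settings concrete, the short sketch below plots several augmented copies of a single training digit. It reuses the rotation/shift/shear/zoom parameters from the cell above and assumes `X_train` is still available in channels-first shape `(N, 1, 28, 28)`; treat it as an optional visual check rather than part of the training pipeline.
###Code
import matplotlib.pyplot as plt
# Generator with the same augmentation parameters as used for training above
viz_gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
                                   height_shift_range=0.08, zoom_range=0.08)
# Take one digit and request augmented variants one at a time
one_digit = np.expand_dims(X_train[0], 0)   # shape (1, 1, 28, 28)
viz_iter = viz_gen.flow(one_digit, batch_size=1)
fig, axes = plt.subplots(1, 8, figsize=(12, 2))
for ax in axes:
    aug_img = next(viz_iter)[0, 0]          # drop the batch and channel axes
    ax.imshow(aug_img, cmap='gray')
    ax.axis('off')
plt.show()
###Output
_____no_output_____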
###Markdown
Batchnorm + data augmentation
###Code
def get_model_bn():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model_bn()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=test_batches, nb_val_samples=test_batches.N)
###Output
Epoch 1/12
60000/60000 [==============================] - 13s - loss: 0.0166 - acc: 0.9947 - val_loss: 0.0205 - val_acc: 0.9933
Epoch 2/12
60000/60000 [==============================] - 13s - loss: 0.0168 - acc: 0.9950 - val_loss: 0.0194 - val_acc: 0.9942
Epoch 3/12
60000/60000 [==============================] - 12s - loss: 0.0151 - acc: 0.9953 - val_loss: 0.0197 - val_acc: 0.9942
Epoch 4/12
60000/60000 [==============================] - 13s - loss: 0.0135 - acc: 0.9954 - val_loss: 0.0179 - val_acc: 0.9938
Epoch 5/12
60000/60000 [==============================] - 12s - loss: 0.0143 - acc: 0.9953 - val_loss: 0.0257 - val_acc: 0.9925
Epoch 6/12
60000/60000 [==============================] - 12s - loss: 0.0139 - acc: 0.9954 - val_loss: 0.0150 - val_acc: 0.9949
Epoch 7/12
60000/60000 [==============================] - 13s - loss: 0.0127 - acc: 0.9958 - val_loss: 0.0218 - val_acc: 0.9932
Epoch 8/12
60000/60000 [==============================] - 13s - loss: 0.0121 - acc: 0.9962 - val_loss: 0.0264 - val_acc: 0.9917
Epoch 9/12
60000/60000 [==============================] - 13s - loss: 0.0120 - acc: 0.9960 - val_loss: 0.0209 - val_acc: 0.9935
Epoch 10/12
60000/60000 [==============================] - 13s - loss: 0.0130 - acc: 0.9957 - val_loss: 0.0171 - val_acc: 0.9948
Epoch 11/12
60000/60000 [==============================] - 13s - loss: 0.0132 - acc: 0.9958 - val_loss: 0.0227 - val_acc: 0.9932
Epoch 12/12
60000/60000 [==============================] - 12s - loss: 0.0115 - acc: 0.9964 - val_loss: 0.0172 - val_acc: 0.9945
###Markdown
Batchnorm + dropout + data augmentation
###Code
def get_model_bn_do():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model_bn_do()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
###Output
Epoch 1/1
60000/60000 [==============================] - 13s - loss: 0.0186 - acc: 0.9942 - val_loss: 0.0193 - val_acc: 0.9945
###Markdown
Ensembling
###Code
def fit_model():
model = get_model_bn_do()
model.fit_generator(batches, batches.N, nb_epoch=1, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=18, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
return model
models = [fit_model() for i in range(6)]
path = "data/mnist/"
model_path = path + 'models/'
for i,m in enumerate(models):
m.save_weights(model_path+'cnn-mnist23-'+str(i)+'.pkl')
evals = np.array([m.evaluate(X_test, y_test, batch_size=256) for m in models])
evals.mean(axis=0)
all_preds = np.stack([m.predict(X_test, batch_size=256) for m in models])
all_preds.shape
avg_preds = all_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval()
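# As a cross-check of the Keras metric above, the ensemble accuracy can also be
# computed directly with NumPy (a sketch; assumes y_test holds one-hot labels,
# as used by the generators and evaluate() calls above):
ensemble_acc = np.mean(np.argmax(avg_preds, axis=1) == np.argmax(y_test, axis=1))
print('Ensembled test accuracy: %.4f' % ensemble_acc)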
###Output
_____no_output_____ |
content/courses/deeplearning/notebooks/pytorch/Time_Series_Prediction_with_LSTM_Using_PyTorch.ipynb | ###Markdown
Time Series Prediction with LSTM Using PyTorchThis kernel is based on *datasets* from[Time Series Forecasting with the Long Short-Term Memory Network in Python](https://machinelearningmastery.com/time-series-forecasting-long-short-term-memory-network-python/)[Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras](https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/) Download Dataset
###Code
#!wget https://raw.githubusercontent.com/jbrownlee/Datasets/master/shampoo.csv
!wget https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv
###Output
--2020-12-14 18:37:44-- https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2180 (2.1K) [text/plain]
Saving to: ‘airline-passengers.csv.3’
airline-passengers. 0%[ ] 0 --.-KB/s
airline-passengers. 100%[===================>] 2.13K --.-KB/s in 0s
2020-12-14 18:37:44 (60.7 MB/s) - ‘airline-passengers.csv.3’ saved [2180/2180]
###Markdown
Library
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.nn as nn
from torch.autograd import Variable
from sklearn.preprocessing import MinMaxScaler
###Output
_____no_output_____
###Markdown
Data Plot
###Code
training_set = pd.read_csv('airline-passengers.csv')
#training_set = pd.read_csv('shampoo.csv')
training_set = training_set.iloc[:,1:2].values
#plt.plot(training_set, label = 'Shampoo Sales Data')
plt.plot(training_set, label = 'Airline Passengers Data')
plt.show()
###Output
_____no_output_____
###Markdown
Dataloading
###Code
def sliding_windows(data, seq_length):
x = []
y = []
for i in range(len(data)-seq_length-1):
_x = data[i:(i+seq_length)]
_y = data[i+seq_length]
x.append(_x)
y.append(_y)
return np.array(x),np.array(y)
sc = MinMaxScaler()
training_data = sc.fit_transform(training_set)
seq_length = 4
x, y = sliding_windows(training_data, seq_length)
train_size = int(len(y) * 0.67)
test_size = len(y) - train_size
dataX = Variable(torch.Tensor(np.array(x)))
dataY = Variable(torch.Tensor(np.array(y)))
trainX = Variable(torch.Tensor(np.array(x[0:train_size])))
trainY = Variable(torch.Tensor(np.array(y[0:train_size])))
testX = Variable(torch.Tensor(np.array(x[train_size:len(x)])))
testY = Variable(torch.Tensor(np.array(y[train_size:len(y)])))
###Output
_____no_output_____
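###Markdown
A quick sanity check of `sliding_windows` on a toy sequence makes the windowing explicit: each row of `x` holds `seq_length` consecutive values and the corresponding entry of `y` is the value that immediately follows them. This is only an illustrative sketch reusing the function defined above.
###Code
toy_series = np.arange(10).reshape(-1, 1)   # stand-in time series 0..9
toy_x, toy_y = sliding_windows(toy_series, 4)
print(toy_x.squeeze())   # rows of 4 consecutive values
print(toy_y.squeeze())   # the value following each window
###Output
_____no_output_____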
###Markdown
Model
###Code
class LSTM(nn.Module):
def __init__(self, num_classes, input_size, hidden_size, num_layers):
super(LSTM, self).__init__()
self.num_classes = num_classes
self.num_layers = num_layers
self.input_size = input_size
self.hidden_size = hidden_size
self.seq_length = seq_length
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
num_layers=num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, x):
h_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size))
c_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size))
# Propagate input through LSTM
ula, (h_out, _) = self.lstm(x, (h_0, c_0))
h_out = h_out.view(-1, self.hidden_size)
out = self.fc(h_out)
return out
###Output
_____no_output_____
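###Markdown
Before training, it can help to confirm the tensor shapes this model consumes and produces. The sketch below instantiates the class with the same hyperparameters used in the training cell that follows and pushes the prepared `trainX` through it once, purely as a shape check (no gradients are needed here).
###Code
shape_check = LSTM(num_classes=1, input_size=1, hidden_size=2, num_layers=1)
with torch.no_grad():
    check_out = shape_check(trainX)
print(trainX.shape)     # (number of windows, seq_length, 1)
print(check_out.shape)  # (number of windows, 1)
###Output
_____no_output_____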
###Markdown
Training
###Code
num_epochs = 2000
learning_rate = 0.01
input_size = 1
hidden_size = 2
num_layers = 1
num_classes = 1
lstm = LSTM(num_classes, input_size, hidden_size, num_layers)
criterion = torch.nn.MSELoss() # mean-squared error for regression
optimizer = torch.optim.Adam(lstm.parameters(), lr=learning_rate)
#optimizer = torch.optim.SGD(lstm.parameters(), lr=learning_rate)
# Train the model
for epoch in range(num_epochs):
outputs = lstm(trainX)
optimizer.zero_grad()
# obtain the loss function
loss = criterion(outputs, trainY)
loss.backward()
optimizer.step()
if epoch % 100 == 0:
print("Epoch: %d, loss: %1.5f" % (epoch, loss.item()))
###Output
Epoch: 0, loss: 0.98159
Epoch: 100, loss: 0.01017
Epoch: 200, loss: 0.00424
Epoch: 300, loss: 0.00279
Epoch: 400, loss: 0.00272
Epoch: 500, loss: 0.00266
Epoch: 600, loss: 0.00259
Epoch: 700, loss: 0.00252
Epoch: 800, loss: 0.00245
Epoch: 900, loss: 0.00238
Epoch: 1000, loss: 0.00232
Epoch: 1100, loss: 0.00226
Epoch: 1200, loss: 0.00220
Epoch: 1300, loss: 0.00215
Epoch: 1400, loss: 0.00210
Epoch: 1500, loss: 0.00205
Epoch: 1600, loss: 0.00200
Epoch: 1700, loss: 0.00196
Epoch: 1800, loss: 0.00192
Epoch: 1900, loss: 0.00188
###Markdown
Testing for Airline Passengers Dataset
###Code
lstm.eval()
train_predict = lstm(dataX)
data_predict = train_predict.data.numpy()
dataY_plot = dataY.data.numpy()
data_predict = sc.inverse_transform(data_predict)
dataY_plot = sc.inverse_transform(dataY_plot)
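# Optional sketch: report the prediction error on the held-out portion in the
# original passenger units (train_size marks the train/test split used above)
test_rmse = np.sqrt(np.mean((data_predict[train_size:] - dataY_plot[train_size:]) ** 2))
print("Test RMSE: %.2f passengers" % test_rmse)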
plt.axvline(x=train_size, c='r', linestyle='--')
plt.plot(dataY_plot)
plt.plot(data_predict)
plt.suptitle('Time-Series Prediction')
plt.show()
###Output
_____no_output_____ |
Tutorials/03_Hartree-Fock/3c_unrestricted-hartree-fock.ipynb | ###Markdown
Unrestricted Open-Shell Hartree-FockIn the first two tutorials in this module, we wrote programs which implement a closed-shell formulation of Hartree-Fock theory using restricted orbitals, aptly named Restricted Hartree-Fock (RHF). In this tutorial, we will abandon strictly closed-shell systems and the notion of restricted orbitals, in favor of a more general theory known as Unrestricted Hartree-Fock (UHF) which can accommodate more diverse molecules. In UHF, the orbitals occupied by spin up ($\alpha$) electrons and those occupied by spin down ($\beta$) electrons no longer have the same spatial component, e.g., $$\chi_i({\bf x}) = \begin{cases}\psi^{\alpha}_j({\bf r})\alpha(\omega) \\ \psi^{\beta}_j({\bf r})\beta(\omega)\end{cases},$$meaning that they will not have the same orbital energy. This relaxation of orbital constraints allows for more variational flexibility, which leads to UHF always being able to find a lower total energy solution than RHF. I. Theoretical OverviewIn UHF, we seek to solve the coupled equations\begin{align}{\bf F}^{\alpha}{\bf C}^{\alpha} &= {\bf SC}^{\alpha}{\bf\epsilon}^{\alpha} \\{\bf F}^{\beta}{\bf C}^{\beta} &= {\bf SC}^{\beta}{\bf\epsilon}^{\beta},\end{align}which are the unrestricted generalizations of the restricted Roothan equations, called the Pople-Nesbitt equations. Here, the one-electron Fock matrices are given by\begin{align}F_{\mu\nu}^{\alpha} &= H_{\mu\nu} + (\mu\,\nu\mid\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\beta}\\F_{\mu\nu}^{\beta} &= H_{\mu\nu} + (\mu\,\nu\mid\,\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\alpha},\end{align}where the density matrices $D_{\lambda\sigma}^{\alpha}$ and $D_{\lambda\sigma}^{\beta}$ are given by\begin{align}D_{\lambda\sigma}^{\alpha} &= C_{\sigma i}^{\alpha}C_{\lambda i}^{\alpha}\\D_{\lambda\sigma}^{\beta} &= C_{\sigma i}^{\beta}C_{\lambda i}^{\beta}.\end{align}Unlike for RHF, the orbital coefficient matrices ${\bf C}^{\alpha}$ and ${\bf C}^{\beta}$ are of dimension $M\times N^{\alpha}$ and $M\times N^{\beta}$, where $M$ is the number of AO basis functions and $N^{\alpha}$ ($N^{\beta}$) is the number of $\alpha$ ($\beta$) electrons. The total UHF energy is given by\begin{align}E^{\rm UHF}_{\rm total} &= E^{\rm UHF}_{\rm elec} + E^{\rm BO}_{\rm nuc},\;\;{\rm with}\\E^{\rm UHF}_{\rm elec} &= \frac{1}{2}[({\bf D}^{\alpha} + {\bf D}^{\beta}){\bf H} + {\bf D}^{\alpha}{\bf F}^{\alpha} + {\bf D}^{\beta}{\bf F}^{\beta}].\end{align} II. ImplementationIn any SCF program, there will be several common elements which can be abstracted from the program itself into separate modules, classes, or functions to 'clean up' the code that will need to be written explicitly; examples of this concept can be seen throughout the Psi4NumPy reference implementations. For the purposes of this tutorial, we can achieve some degree of code cleanup without sacrificing readabilitiy and clarity by focusing on abstracting only the parts of the code which are both - Lengthy subroutines, and - Used repeatedly. In our UHF program, let's use what we've learned in the last tutorial by also implementing DIIS convergence accelleration for our SCF iterations. With this in mind, two subroutines in particular would benefit from abstraction are1. Orthogonalize & diagonalize Fock matrix2. 
Extrapolate previous trial vectors for new DIIS solution vectorBefore we start writing our UHF program, let's try to write functions which can perform the above tasks so that we can use them in our implementation of UHF. Recall that defining functions in Python has the following syntax:~~~pythondef function_name(*args **kwargs): function block return return_values~~~A thorough discussion of defining functions in Python can be found [here](https://docs.python.org/2/tutorial/controlflow.htmldefining-functions "Go to Python docs"). First, let's write a function which can diagonalize the Fock matrix and return the orbital coefficient matrix **C** and the density matrix **D**. From our RHF tutorial, this subroutine is executed with:~~~pythonF_p = A.dot(F).dot(A)e, C_p = np.linalg.eigh(F_p)C = A.dot(C_p)C_occ = C[:, :ndocc]D = np.einsum('pi,qi->pq', C_occ, C_occ)~~~Examining this code block, there are three quantities which must be specified beforehand:- Fock matrix, **F**- Orthogonalization matrix, ${\bf A} = {\bf S}^{-1/2}$- Number of doubly occupied orbitals, `ndocc`However, since the orthogonalization matrix **A** is a static quantity (only built once, then left alone) we may choose to leave **A** as a *global* quantity, instead of an argument to our function. In the cell below, using the code snippet given above, write a function `diag_F()` which takes **F** and the number of orbitals `norb` as arguments, and returns **C** and **D**:
###Code
# ==> Define function to diagonalize F <==
def diag_F(F, norb):
F_p = A.dot(F).dot(A)
e, C_p = np.linalg.eigh(F_p)
C = A.dot(C_p)
C_occ = C[:, :norb]
D = np.einsum('pi,qi->pq', C_occ, C_occ)
return (C, D)
###Output
_____no_output_____
###Markdown
Next, let's write a function to perform DIIS extrapolation and generate a new solution vector. Recall that the DIIS-accellerated SCF algorithm is: Algorithm 1: DIIS within a generic SCF Iteration1. Compute **F**, append to list of previous trial vectors2. Compute AO orbital gradient **r**, append to list of previous residual vectors3. Compute RHF energy3. Check convergence criteria - If RMSD of **r** sufficiently small, and - If change in SCF energy sufficiently small, break4. Build **B** matrix from previous AO gradient vectors5. Solve Pulay equation for coefficients $\{c_i\}$6. Compute DIIS solution vector **F_DIIS** from $\{c_i\}$ and previous trial vectors7. Compute new orbital guess with **F_DIIS**In our function, we will perform steps 4-6 of the above algorithm. What information will we need to provide our function in order to do so? To build **B** (step 4 above) in the previous tutorial, we used:~~~python Build B matrixB_dim = len(F_list) + 1B = np.empty((B_dim, B_dim))B[-1, :] = -1B[:, -1] = -1B[-1, -1] = 0for i in xrange(len(F_list)): for j in xrange(len(F_list)): B[i, j] = np.einsum('ij,ij->', DIIS_RESID[i], DIIS_RESID[j])~~~Here, we see that we must have all previous DIIS residual vectors (`DIIS_RESID`), as well as knowledge about how many previous trial vectors there are (for the dimension of **B**). To solve the Pulay equation (step 5 above):~~~python Build RHS of Pulay equation rhs = np.zeros((B_dim))rhs[-1] = -1 Solve Pulay equation for c_i's with NumPycoeff = np.linalg.solve(B, rhs)~~~For this step, we only need the dimension of **B** (which we computed in step 4 above) and a NumPy routine, so this step doesn't require any additional arguments. Finally, to build the DIIS Fock matrix (step 6):~~~python Build DIIS Fock matrixF = np.zeros_like(F_list[0])for x in xrange(coeff.shape[0] - 1): F += coeff[x] * F_list[x]~~~Clearly, for this step, we need to know all the previous trial vectors (`F_list`) and the coefficients we generated in the previous step. In the cell below, write a funciton `diis_xtrap()` according to Algorithm 1 steps 4-6, using the above code snippets, which takes a list of previous trial vectors `F_list` and residual vectors `DIIS_RESID` as arguments and returns the new DIIS solution vector `F_DIIS`:
###Code
# ==> Build DIIS Extrapolation Function <==
def diis_xtrap(F_list, DIIS_RESID):
# Build B matrix
B_dim = len(F_list) + 1
B = np.empty((B_dim, B_dim))
B[-1, :] = -1
B[:, -1] = -1
B[-1, -1] = 0
for i in range(len(F_list)):
for j in range(len(F_list)):
B[i, j] = np.einsum('ij,ij->', DIIS_RESID[i], DIIS_RESID[j])
# Build RHS of Pulay equation
rhs = np.zeros((B_dim))
rhs[-1] = -1
# Solve Pulay equation for c_i's with NumPy
coeff = np.linalg.solve(B, rhs)
# Build DIIS Fock matrix
F_DIIS = np.zeros_like(F_list[0])
for x in range(coeff.shape[0] - 1):
F_DIIS += coeff[x] * F_list[x]
return F_DIIS
###Output
_____no_output_____
###Markdown
We are now ready to begin writing our UHF program! Let's begin by importing Psi4 and NumPy, and defining our molecule & basic options:
###Code
# ==> Import Psi4 & NumPy <==
import psi4
import numpy as np
# ==> Set Basic Psi4 Options <==
# Memory specification
psi4.set_memory(int(5e8))
numpy_memory = 2
# Set output file
psi4.core.set_output_file('output.dat', False)
# Define Physicist's water -- don't forget C1 symmetry!
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
# Set computation options
psi4.set_options({'guess': 'core',
'basis': 'cc-pvdz',
'scf_type': 'pk',
'e_convergence': 1e-8,
'reference': 'uhf'})
###Output
_____no_output_____
###Markdown
You may notice that in the above `psi4.set_options()` block, there are two additional options -- namely, `'guess': 'core'` and `'reference': 'uhf'`. These options make sure that when we ultimately check our program against Psi4, the options Psi4 uses are identical to our implementation. Next, let's define the options for our UHF program; we can borrow these options from our RHF implementation with DIIS acceleration that we completed in our last tutorial.
###Code
# ==> Set default program options <==
# Maximum SCF iterations
MAXITER = 40
# Energy convergence criterion
E_conv = 1.0e-6
D_conv = 1.0e-3
###Output
_____no_output_____
###Markdown
Static quantities like the ERI tensor, core Hamiltonian, and orthogonalization matrix have exactly the same form in UHF as in RHF. Unlike in RHF, however, we will need the number of $\alpha$ and $\beta$ electrons. Fortunately, both these values are available through querying the Wavefunction object. In the cell below, generate these static objects and compute each of the following:- Number of basis functions, `nbf`- Number of alpha electrons, `nalpha`- Number of beta electrons, `nbeta`- Number of doubly occupied orbitals, `ndocc` (Hint: In UHF, there can be unpaired electrons!)
###Code
# ==> Compute static 1e- and 2e- quantities with Psi4 <==
# Class instantiation
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option('basis'))
mints = psi4.core.MintsHelper(wfn.basisset())
# Overlap matrix
S = np.asarray(mints.ao_overlap())
# Number of basis Functions, alpha & beta orbitals, and # doubly occupied orbitals
nbf = wfn.nso()
nalpha = wfn.nalpha()
nbeta = wfn.nbeta()
ndocc = min(nalpha, nbeta)
print('Number of basis functions: %d' % (nbf))
print('Number of singly occupied orbitals: %d' % (abs(nalpha - nbeta)))
print('Number of doubly occupied orbitals: %d' % (ndocc))
# Memory check for ERI tensor
I_size = (nbf**4) * 8.e-9
print('\nSize of the ERI tensor will be {:4.2f} GB.'.format(I_size))
memory_footprint = I_size * 1.5
if I_size > numpy_memory:
psi4.core.clean()
raise Exception("Estimated memory utilization (%4.2f GB) exceeds allotted memory \
limit of %4.2f GB." % (memory_footprint, numpy_memory))
# Build ERI Tensor
I = np.asarray(mints.ao_eri())
# Build core Hamiltonian
T = np.asarray(mints.ao_kinetic())
V = np.asarray(mints.ao_potential())
H = T + V
# Construct AO orthogonalization matrix A
A = mints.ao_overlap()
A.power(-0.5, 1.e-16)
A = np.asarray(A)
###Output
Number of basis functions: 24
Number of singly occupied orbitals: 0
Number of doubly occupied orbitals: 5
Size of the ERI tensor will be 0.00 GB.
###Markdown
Unlike the static quantities above, the CORE guess in UHF is slightly different than in RHF. Since the $\alpha$ and $\beta$ electrons do not share spatial orbitals, we must construct a guess for *each* of the $\alpha$ and $\beta$ orbitals and densities. In the cell below, using the function `diag_F()`, construct the CORE guesses and compute the nuclear repulsion energy:(Hint: The number of $\alpha$ orbitals is the same as the number of $\alpha$ electrons!)
###Code
# ==> Build alpha & beta CORE guess <==
Ca, Da = diag_F(H, nalpha)
Cb, Db = diag_F(H, nbeta)
# Get nuclear repulsion energy
E_nuc = mol.nuclear_repulsion_energy()
###Output
_____no_output_____
###Markdown
We are almost ready to perform our SCF iterations; beforehand, however, we must initiate variables for the current & previous SCF energies, and the lists to hold previous residual vectors and trial vectors for the DIIS procedure. Since, in UHF, there are Fock matrices ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ for both $\alpha$ and $\beta$ orbitals, we must apply DIIS to each of these matrices separately. In the cell below, define empty lists to hold previous Fock matrices and residual vectors for both $\alpha$ and $\beta$ orbitals:
###Code
# ==> Pre-Iteration Setup <==
# SCF & Previous Energy
SCF_E = 0.0
E_old = 0.0
###Output
_____no_output_____
###Markdown
We are now ready to write the SCF iterations. The algorithm for UHF-SCF iteration, with DIIS convergence accelleration, is: Algorithm 2: DIIS within UHF-SCF Iteration1. Build ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$, append to trial vector lists2. Compute the DIIS residual for $\alpha$ and $\beta$, append to residual vector lists3. Compute UHF energy4. Convergence check - If average of RMSD of $\alpha$ and $\beta$ residual sufficiently small, and - If change in UHF energy sufficiently small, break5. DIIS extrapolation of ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ to form new solution vector6. Compute new ${\alpha}$ and ${\beta}$ orbital & density guessesIn the cell below, write the UHF-SCF iteration according to Algorithm 2:(Hint: Use your functions `diis_xtrap()` and `diag_F` for Algorithm 2 steps 5 & 6, respectively)
###Code
# Trial & Residual Vector Lists -- one each for alpha & beta
F_list_a = []
F_list_b = []
R_list_a = []
R_list_b = []
# ==> UHF-SCF Iterations <==
print('==> Starting SCF Iterations <==\n')
# Begin Iterations
for scf_iter in range(MAXITER):
# Build Fa & Fb matrices
Ja = np.einsum('pqrs,rs->pq', I, Da)
Jb = np.einsum('pqrs,rs->pq', I, Db)
Ka = np.einsum('prqs,rs->pq', I, Da)
Kb = np.einsum('prqs,rs->pq', I, Db)
Fa = H + (Ja + Jb) - Ka
Fb = H + (Ja + Jb) - Kb
# Compute DIIS residual for Fa & Fb
diis_r_a = A.dot(Fa.dot(Da).dot(S) - S.dot(Da).dot(Fa)).dot(A)
diis_r_b = A.dot(Fb.dot(Db).dot(S) - S.dot(Db).dot(Fb)).dot(A)
# Append trial & residual vectors to lists
F_list_a.append(Fa)
F_list_b.append(Fb)
R_list_a.append(diis_r_a)
R_list_b.append(diis_r_b)
# Compute UHF Energy
SCF_E = np.einsum('pq,pq->', (Da + Db), H)
SCF_E += np.einsum('pq,pq->', Da, Fa)
SCF_E += np.einsum('pq,pq->', Db, Fb)
SCF_E *= 0.5
SCF_E += E_nuc
dE = SCF_E - E_old
dRMS = 0.5 * (np.mean(diis_r_a**2)**0.5 + np.mean(diis_r_b**2)**0.5)
print('SCF Iteration %3d: Energy = %4.16f dE = % 1.5E dRMS = %1.5E' % (scf_iter, SCF_E, dE, dRMS))
# Convergence Check
if (abs(dE) < E_conv) and (dRMS < D_conv):
break
E_old = SCF_E
# DIIS Extrapolation
if scf_iter >= 2:
Fa = diis_xtrap(F_list_a, R_list_a)
Fb = diis_xtrap(F_list_b, R_list_b)
# Compute new orbital guess
Ca, Da = diag_F(Fa, nalpha)
Cb, Db = diag_F(Fb, nbeta)
# MAXITER exceeded?
    if (scf_iter == MAXITER - 1):
psi4.core.clean()
raise Exception("Maximum number of SCF iterations exceeded.")
# Post iterations
print('\nSCF converged.')
print('Final UHF Energy: %.8f [Eh]' % SCF_E)
###Output
==> Starting SCF Iterations <==
SCF Iteration 0: Energy = -74.1207806468836452 dE = 0.00000E+00 dRMS = 8.64677E-02
SCF Iteration 1: Energy = -74.8671819457688485 dE = -7.46401E-01 dRMS = 6.52840E-02
SCF Iteration 2: Energy = -75.4149087803903342 dE = -5.47727E-01 dRMS = 5.21690E-02
SCF Iteration 3: Energy = -75.9800488561561309 dE = -5.65140E-01 dRMS = 6.34267E-03
SCF Iteration 4: Energy = -75.9894383301614340 dE = -9.38947E-03 dRMS = 5.45826E-04
SCF Iteration 5: Energy = -75.9897683674259383 dE = -3.30037E-04 dRMS = 1.70671E-04
SCF Iteration 6: Energy = -75.9897948623176376 dE = -2.64949E-05 dRMS = 4.28126E-05
SCF Iteration 7: Energy = -75.9897957712875609 dE = -9.08970E-07 dRMS = 5.40285E-06
SCF converged.
Final UHF Energy: -75.98979577 [Eh]
###Markdown
Congratulations! You've written your very own Unrestricted Hartree-Fock program with DIIS convergence accelleration! Finally, let's check your final UHF energy against Psi4:
###Code
# Compare to Psi4
SCF_E_psi = psi4.energy('SCF')
psi4.driver.p4util.compare_values(SCF_E_psi, SCF_E, 6, 'SCF Energy')
###Output
SCF Energy........................................................PASSED
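###Markdown
Since this water molecule is a closed-shell singlet, the converged UHF solution should coincide with the RHF one. The optional sketch below re-runs Psi4 with a restricted reference purely as a sanity check on that point; it reuses the same comparison helper as above.
###Code
# Optional check: for a closed-shell molecule, UHF and RHF should give the same energy
psi4.set_options({'reference': 'rhf'})
SCF_E_rhf = psi4.energy('SCF')
psi4.driver.p4util.compare_values(SCF_E_rhf, SCF_E, 6, 'UHF vs. RHF Energy (closed shell)')
###Output
_____no_output_____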
###Markdown
Unrestricted Open-Shell Hartree-FockIn the first two tutorials in this module, we wrote programs which implement a closed-shell formulation of Hartree-Fock theory using restricted orbitals, aptly named Restricted Hartree-Fock (RHF). In this tutorial, we will abandon strictly closed-shell systems and the notion of restricted orbitals, in favor of a more general theory known as Unrestricted Hartree-Fock (UHF) which can accommodate more diverse molecules. In UHF, the orbitals occupied by spin up ($\alpha$) electrons and those occupied by spin down ($\beta$) electrons no longer have the same spatial component, e.g., $$\chi_i({\bf x}) = \begin{cases}\psi^{\alpha}_j({\bf r})\alpha(\omega) \\ \psi^{\beta}_j({\bf r})\beta(\omega)\end{cases},$$meaning that they will not have the same orbital energy. This relaxation of orbital constraints allows for more variational flexibility, which leads to UHF always being able to find a lower total energy solution than RHF. I. Theoretical OverviewIn UHF, we seek to solve the coupled equations\begin{align}{\bf F}^{\alpha}{\bf C}^{\alpha} &= {\bf SC}^{\alpha}{\bf\epsilon}^{\alpha} \\{\bf F}^{\beta}{\bf C}^{\beta} &= {\bf SC}^{\beta}{\bf\epsilon}^{\beta},\end{align}which are the unrestricted generalizations of the restricted Roothan equations, called the Pople-Nesbet-Berthier equations. Here, the one-electron Fock matrices are given by\begin{align}F_{\mu\nu}^{\alpha} &= H_{\mu\nu} + (\mu\,\nu\mid\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\beta}\\F_{\mu\nu}^{\beta} &= H_{\mu\nu} + (\mu\,\nu\mid\,\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\alpha},\end{align}where the density matrices $D_{\lambda\sigma}^{\alpha}$ and $D_{\lambda\sigma}^{\beta}$ are given by\begin{align}D_{\lambda\sigma}^{\alpha} &= C_{\sigma i}^{\alpha}C_{\lambda i}^{\alpha}\\D_{\lambda\sigma}^{\beta} &= C_{\sigma i}^{\beta}C_{\lambda i}^{\beta}.\end{align}Unlike for RHF, the orbital coefficient matrices ${\bf C}^{\alpha}$ and ${\bf C}^{\beta}$ are of dimension $M\times N^{\alpha}$ and $M\times N^{\beta}$, where $M$ is the number of AO basis functions and $N^{\alpha}$ ($N^{\beta}$) is the number of $\alpha$ ($\beta$) electrons. The total UHF energy is given by\begin{align}E^{\rm UHF}_{\rm total} &= E^{\rm UHF}_{\rm elec} + E^{\rm BO}_{\rm nuc},\;\;{\rm with}\\E^{\rm UHF}_{\rm elec} &= \frac{1}{2}[({\bf D}^{\alpha} + {\bf D}^{\beta}){\bf H} + {\bf D}^{\alpha}{\bf F}^{\alpha} + {\bf D}^{\beta}{\bf F}^{\beta}].\end{align} II. ImplementationIn any SCF program, there will be several common elements which can be abstracted from the program itself into separate modules, classes, or functions to 'clean up' the code that will need to be written explicitly; examples of this concept can be seen throughout the Psi4Julia reference implementations. For the purposes of this tutorial, we can achieve some degree of code cleanup without sacrificing readabilitiy and clarity by focusing on abstracting only the parts of the code which are both - Lengthy subroutines, and - Used repeatedly. In our UHF program, let's use what we've learned in the last tutorial by also implementing DIIS convergence accelleration for our SCF iterations. With this in mind, two subroutines in particular would benefit from abstraction are1. Orthogonalize & diagonalize Fock matrix2. 
Extrapolate previous trial vectors for new DIIS solution vectorBefore we start writing our UHF program, let's try to write functions which can perform the above tasks so that we can use them in our implementation of UHF. Recall that defining functions in Julia has the following syntax:~~~juliafunction function_name(args; kwargs) function block return_valuesend~~~A thorough discussion of defining functions in Julia can be found [here](https://docs.julialang.org/en/v1/manual/functions/index.html "Go to Julia docs"). First, let's write a function which can diagonalize the Fock matrix and return the orbital coefficient matrix **C** and the density matrix **D**. From our RHF tutorial, this subroutine is executed with:~~~juliaF_p = A * F * Ae, C_p = eigen(Hermitian(F_p))C = A * C_pC_occ = C[:, 1:ndocc]D = C_occ * C_occ'~~~Examining this code block, there are three quantities which must be specified beforehand:- Fock matrix, **F**- Orthogonalization matrix, ${\bf A} = {\bf S}^{-1/2}$- Number of doubly occupied orbitals, `ndocc`However, since the orthogonalization matrix **A** is a static quantity (only built once, then left alone) we may choose to leave **A** as a *global* quantity, instead of an argument to our function. In the cell below, using the code snippet given above, write a function `diag_F()` which takes **F** and the number of orbitals `norb` as arguments, and returns **C** and **D**:
###Code
# ==> Define function to diagonalize F <==
function diag_F(F, norb, A)
F_p = A * F * A
e, C_p = eigen(Hermitian(F_p))
C = A * C_p
C_occ = C[:, 1:norb]
D = C_occ * C_occ'
C, D
end
###Output
_____no_output_____
###Markdown
Next, let's write a function to perform DIIS extrapolation and generate a new solution vector. Recall that the DIIS-accellerated SCF algorithm is: Algorithm 1: DIIS within a generic SCF Iteration1. Compute **F**, append to list of previous trial vectors2. Compute AO orbital gradient **r**, append to list of previous residual vectors3. Compute RHF energy3. Check convergence criteria - If RMSD of **r** sufficiently small, and - If change in SCF energy sufficiently small, break4. Build **B** matrix from previous AO gradient vectors5. Solve Pulay equation for coefficients $\{c_i\}$6. Compute DIIS solution vector **F_DIIS** from $\{c_i\}$ and previous trial vectors7. Compute new orbital guess with **F_DIIS**In our function, we will perform steps 4-6 of the above algorithm. What information will we need to provide our function in order to do so? To build **B** (step 4 above) in the previous tutorial, we used:~~~julia Build B matrixB_dim = length(F_list) + 1B = zeros(B_dim, B_dim)B[end, :] .= -1B[: , end] .= -1B[end, end] = 0for i in eachindex(F_list), j in eachindex(F_list) B[i, j] = dot(DIIS_RESID[i], DIIS_RESID[j])end~~~Here, we see that we must have all previous DIIS residual vectors (`DIIS_RESID`), as well as knowledge about how many previous trial vectors there are (for the dimension of **B**). To solve the Pulay equation (step 5 above):~~~julia Build RHS of Pulay equation rhs = zeros(B_dim)rhs[end] = -1 Solve Pulay equation for c_i's with NumPycoeff = B \ rhs~~~For this step, we only need the dimension of **B** (which we computed in step 4 above) and a Julia routine, so this step doesn't require any additional arguments. Finally, to build the DIIS Fock matrix (step 6):~~~julia Build DIIS Fock matrixF = zeros(size(F_list[0]))for x in 1:length(coeff) - 1 F += coeff[x] * F_list[x]end~~~Clearly, for this step, we need to know all the previous trial vectors (`F_list`) and the coefficients we generated in the previous step. In the cell below, write a funciton `diis_xtrap()` according to Algorithm 1 steps 4-6, using the above code snippets, which takes a list of previous trial vectors `F_list` and residual vectors `DIIS_RESID` as arguments and returns the new DIIS solution vector `F_DIIS`:
###Code
# ==> Build DIIS Extrapolation Function <==
function diis_xtrap(F_list, DIIS_RESID)
# Build B matrix
B_dim = length(F_list) + 1
B = zeros(B_dim, B_dim)
B[end, :] .= -1
B[: , end] .= -1
B[end, end] = 0
for i in eachindex(F_list), j in eachindex(F_list)
B[i, j] = dot(DIIS_RESID[i], DIIS_RESID[j])
end
# Build RHS of Pulay equation
rhs = zeros(B_dim)
rhs[end] = -1
# Solve Pulay equation for c_i's with Julia
coeff = B \ rhs
# Build DIIS Fock matrix
F = zeros(size(F_list[1]))
for i in 1:length(coeff) - 1
F += coeff[i] * F_list[i]
end
F
end
###Output
_____no_output_____
###Markdown
We are now ready to begin writing our UHF program! Let's begin by importing Psi4 , NumPy, TensorOperations, LinearAlgebra, and defining our molecule & basic options:
###Code
# ==> Import Psi4 & NumPy <==
using PyCall: pyimport
psi4 = pyimport("psi4")
np = pyimport("numpy") # used only to cast to Psi4 arrays
using TensorOperations: @tensor
using LinearAlgebra: Diagonal, Hermitian, eigen, tr, norm, dot
using Printf: @printf
# ==> Set Basic Psi4 Options <==
# Memory specification
psi4.set_memory(Int(5e8))
numpy_memory = 2
# Set output file
psi4.core.set_output_file("output.dat", false)
# Define Physicist's water -- don't forget C1 symmetry!
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
# Set computation options
psi4.set_options(Dict("basis" => "cc-pvdz",
"scf_type" => "pk",
"e_convergence" => 1e-8,
"guess" => "core",
"reference" => "uhf"))
###Output
_____no_output_____
###Markdown
You may notice that in the above `psi4.set_options()` block, there are two additional options -- namely, `'guess': 'core'` and `'reference': 'uhf'`. These options make sure that when we ultimately check our program against Psi4, the options Psi4 uses are identical to our implementation. Next, let's define the options for our UHF program; we can borrow these options from our RHF implementation with DIIS acceleration that we completed in our last tutorial.
###Code
# ==> Set default program options <==
# Maximum SCF iterations
MAXITER = 40
# Energy convergence criterion
E_conv = 1.0e-6
D_conv = 1.0e-3
###Output
_____no_output_____
###Markdown
Static quantities like the ERI tensor, core Hamiltonian, and orthogonalization matrix have exactly the same form in UHF as in RHF. Unlike in RHF, however, we will need the number of $\alpha$ and $\beta$ electrons. Fortunately, both these values are available through querying the Wavefunction object. In the cell below, generate these static objects and compute each of the following:- Number of basis functions, `nbf`- Number of alpha electrons, `nalpha`- Number of beta electrons, `nbeta`- Number of doubly occupied orbitals, `ndocc` (Hint: In UHF, there can be unpaired electrons!)
###Code
# ==> Compute static 1e- and 2e- quantities with Psi4 <==
# Class instantiation
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option("basis"))
mints = psi4.core.MintsHelper(wfn.basisset())
# Overlap matrix
S = np.asarray(mints.ao_overlap()) # we only need a copy
# Number of basis Functions, alpha & beta orbitals, and # doubly occupied orbitals
nbf = wfn.nso()
nalpha = wfn.nalpha()
nbeta = wfn.nbeta()
ndocc = min(nalpha, nbeta)
println("Number of basis functions: ", nbf)
println("Number of singly occupied orbitals: ", abs(nalpha-nbeta))
println("Number of doubly occupied orbitals: ", ndocc)
# Memory check for ERI tensor
I_size = nbf^4 * 8.e-9
println("\nSize of the ERI tensor will be $I_size GB.")
memory_footprint = I_size * 1.5
if I_size > numpy_memory
psi4.core.clean()
throw(OutOfMemoryError("Estimated memory utilization ($memory_footprint GB) exceeds " *
"allotted memory limit of $numpy_memory GB."))
end
# Build ERI Tensor
I = np.asarray(mints.ao_eri()) # we only need a copy
# Build core Hamiltonian
T = np.asarray(mints.ao_kinetic()) # we only need a copy
V = np.asarray(mints.ao_potential()) # we only need a copy
H = T + V;
# Construct AO orthogonalization matrix A
A = mints.ao_overlap()
A.power(-0.5, 1.e-16) # ≈ Julia's A^(-0.5) after psi4view()
A = np.asarray(A);
###Output
_____no_output_____
###Markdown
Unlike the static quantities above, the CORE guess in UHF is slightly different than in RHF. Since the $\alpha$ and $\beta$ electrons do not share spatial orbitals, we must construct a guess for *each* of the $\alpha$ and $\beta$ orbitals and densities. In the cell below, using the function `diag_F()`, construct the CORE guesses and compute the nuclear repulsion energy:(Hint: The number of $\alpha$ orbitals is the same as the number of $\alpha$ electrons!)
###Code
# ==> Build alpha & beta CORE guess <==
Ca, Da = diag_F(H, nalpha, A)
Cb, Db = diag_F(H, nbeta, A)
# Get nuclear repulsion energy
E_nuc = mol.nuclear_repulsion_energy()
###Output
_____no_output_____
###Markdown
We are almost ready to perform our SCF iterations; beforehand, however, we must initiate variables for the current & previous SCF energies, and the lists to hold previous residual vectors and trial vectors for the DIIS procedure. Since, in UHF, there are Fock matrices ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ for both $\alpha$ and $\beta$ orbitals, we must apply DIIS to each of these matrices separately. In the cell below, define empty lists to hold previous Fock matrices and residual vectors for both $\alpha$ and $\beta$ orbitals:
###Code
# ==> Pre-Iteration Setup <==
# SCF & Previous Energy
SCF_E = 0.0
E_old = 0.0
###Output
_____no_output_____
###Markdown
We are now ready to write the SCF iterations. The algorithm for UHF-SCF iteration, with DIIS convergence accelleration, is: Algorithm 2: DIIS within UHF-SCF Iteration1. Build ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$, append to trial vector lists2. Compute the DIIS residual for $\alpha$ and $\beta$, append to residual vector lists3. Compute UHF energy4. Convergence check - If average of RMSD of $\alpha$ and $\beta$ residual sufficiently small, and - If change in UHF energy sufficiently small, break5. DIIS extrapolation of ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ to form new solution vector6. Compute new ${\alpha}$ and ${\beta}$ orbital & density guessesIn the cell below, write the UHF-SCF iteration according to Algorithm 2:(Hint: Use your functions `diis_xtrap()` and `diag_F` for Algorithm 2 steps 5 & 6, respectively)
###Code
SCF_E = let SCF_E = SCF_E, E_old = E_old, Da = Da, Db = Db, A = A, I = I, H = H, S = S
# Trial & Residual Vector Lists -- one each for α & β
F_list_a = []
F_list_b = []
R_list_a = []
R_list_b = []
# ==> UHF-SCF Iterations <==
println("==> Starting SCF Iterations <==")
# Begin Iterations
for scf_iter in 1:MAXITER
# Build Fa & Fb matrices
@tensor Ja[p,q] := I[p,q,r,s] * Da[r,s]
@tensor Jb[p,q] := I[p,q,r,s] * Db[r,s]
@tensor Ka[p,q] := I[p,r,q,s] * Da[r,s]
@tensor Kb[p,q] := I[p,r,q,s] * Db[r,s]
Fa = H + (Ja + Jb) - Ka
Fb = H + (Ja + Jb) - Kb
# Compute DIIS residual for Fa & Fb
diis_r_a = A * (Fa * Da * S - S * Da * Fa) * A
diis_r_b = A * (Fb * Db * S - S * Db * Fb) * A
# Append trial & residual vectors to lists
push!(F_list_a, Fa)
push!(F_list_b, Fb)
push!(R_list_a, diis_r_a)
push!(R_list_b, diis_r_b)
# Compute UHF Energy
SCF_E = 0.5*tr( H*(Da + Db) + Fa*Da + Fb*Db) + E_nuc
dE = SCF_E - E_old
dRMS = 0.5(norm(diis_r_a) + norm(diis_r_b))
@printf("SCF Iteration %3d: Energy = %4.16f dE = %1.5e dRMS = %1.5e \n",
scf_iter, SCF_E, SCF_E - E_old, dRMS)
# Convergence Check
if abs(dE) < E_conv && dRMS < D_conv
break
end
E_old = SCF_E
# DIIS Extrapolation
if scf_iter >= 2
Fa = diis_xtrap(F_list_a, R_list_a)
Fb = diis_xtrap(F_list_b, R_list_b)
end
# Compute new orbital guess
Ca, Da = diag_F(Fa, nalpha, A)
Cb, Db = diag_F(Fb, nbeta, A)
# MAXITER exceeded?
if scf_iter == MAXITER
psi4.core.clean()
            error("Maximum number of SCF iterations exceeded.")
end
end
SCF_E
end
# Post iterations
println("\nSCF converged.")
println("Final RHF Energy: $SCF_E [Eh]")
println()
###Output
_____no_output_____
###Markdown
Congratulations! You've written your very own Unrestricted Hartree-Fock program with DIIS convergence accelleration! Finally, let's check your final UHF energy against Psi4:
###Code
# Compare to Psi4
SCF_E_psi = psi4.energy("SCF")
SCF_E
psi4.compare_values(SCF_E_psi, SCF_E, 6, "SCF Energy")
###Output
_____no_output_____
###Markdown
Unrestricted Open-Shell Hartree-FockIn the first two tutorials in this module, we wrote programs which implement a closed-shell formulation of Hartree-Fock theory using restricted orbitals, aptly named Restricted Hartree-Fock (RHF). In this tutorial, we will abandon strictly closed-shell systems and the notion of restricted orbitals, in favor of a more general theory known as Unrestricted Hartree-Fock (UHF) which can accommodate more diverse molecules. In UHF, the orbitals occupied by spin up ($\alpha$) electrons and those occupied by spin down ($\beta$) electrons no longer have the same spatial component, e.g., $$\chi_i({\bf x}) = \begin{cases}\psi^{\alpha}_j({\bf r})\alpha(\omega) \\ \psi^{\beta}_j({\bf r})\beta(\omega)\end{cases},$$meaning that they will not have the same orbital energy. This relaxation of orbital constraints allows for more variational flexibility, which leads to UHF always being able to find a lower total energy solution than RHF. I. Theoretical OverviewIn UHF, we seek to solve the coupled equations\begin{align}{\bf F}^{\alpha}{\bf C}^{\alpha} &= {\bf SC}^{\alpha}{\bf\epsilon}^{\alpha} \\{\bf F}^{\beta}{\bf C}^{\beta} &= {\bf SC}^{\beta}{\bf\epsilon}^{\beta},\end{align}which are the unrestricted generalizations of the restricted Roothan equations, called the Pople-Nesbitt equations. Here, the one-electron Fock matrices are given by\begin{align}F_{\mu\nu}^{\alpha} &= H_{\mu\nu} + (\mu\,\nu\mid\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\beta}\\F_{\mu\nu}^{\beta} &= H_{\mu\nu} + (\mu\,\nu\mid\,\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\,\mid\nu\,\sigma)D_{\lambda\sigma}^{\alpha},\end{align}where the density matrices $D_{\lambda\sigma}^{\alpha}$ and $D_{\lambda\sigma}^{\beta}$ are given by\begin{align}D_{\lambda\sigma}^{\alpha} &= C_{\sigma i}^{\alpha}C_{\lambda i}^{\alpha}\\D_{\lambda\sigma}^{\beta} &= C_{\sigma i}^{\beta}C_{\lambda i}^{\beta}.\end{align}Unlike for RHF, the orbital coefficient matrices ${\bf C}^{\alpha}$ and ${\bf C}^{\beta}$ are of dimension $M\times N^{\alpha}$ and $M\times N^{\beta}$, where $M$ is the number of AO basis functions and $N^{\alpha}$ ($N^{\beta}$) is the number of $\alpha$ ($\beta$) electrons. The total UHF energy is given by\begin{align}E^{\rm UHF}_{\rm total} &= E^{\rm UHF}_{\rm elec} + E^{\rm BO}_{\rm nuc},\;\;{\rm with}\\E^{\rm UHF}_{\rm elec} &= \frac{1}{2}[({\bf D}^{\alpha} + {\bf D}^{\beta}){\bf H} + {\bf D}^{\alpha}{\bf F}^{\alpha} + {\bf D}^{\beta}{\bf F}^{\beta}].\end{align} II. ImplementationIn any SCF program, there will be several common elements which can be abstracted from the program itself into separate modules, classes, or functions to 'clean up' the code that will need to be written explicitly; examples of this concept can be seen throughout the Psi4NumPy reference implementations. For the purposes of this tutorial, we can achieve some degree of code cleanup without sacrificing readabilitiy and clarity by focusing on abstracting only the parts of the code which are both - Lengthy subroutines, and - Used repeatedly. In our UHF program, let's use what we've learned in the last tutorial by also implementing DIIS convergence accelleration for our SCF iterations. With this in mind, two subroutines in particular would benefit from abstraction are1. Orthogonalize & diagonalize Fock matrix2. 
Extrapolate previous trial vectors for new DIIS solution vectorBefore we start writing our UHF program, let's try to write functions which can perform the above tasks so that we can use them in our implementation of UHF. Recall that defining functions in Python has the following syntax:~~~pythondef function_name(*args **kwargs): function block return return_values~~~A thorough discussion of defining functions in Python can be found [here](https://docs.python.org/2/tutorial/controlflow.htmldefining-functions "Go to Python docs"). First, let's write a function which can diagonalize the Fock matrix and return the orbital coefficient matrix **C** and the density matrix **D**. From our RHF tutorial, this subroutine is executed with:~~~pythonF_p = A.dot(F).dot(A)e, C_p = np.linalg.eigh(F_p)C = A.dot(C_p)C_occ = C[:, :ndocc]D = np.einsum('pi,qi->pq', C_occ, C_occ, optimize=True)~~~Examining this code block, there are three quantities which must be specified beforehand:- Fock matrix, **F**- Orthogonalization matrix, ${\bf A} = {\bf S}^{-1/2}$- Number of doubly occupied orbitals, `ndocc`However, since the orthogonalization matrix **A** is a static quantity (only built once, then left alone) we may choose to leave **A** as a *global* quantity, instead of an argument to our function. In the cell below, using the code snippet given above, write a function `diag_F()` which takes **F** and the number of orbitals `norb` as arguments, and returns **C** and **D**:
###Code
# ==> Define function to diagonalize F <==
def diag_F(F, norb):
F_p = A.dot(F).dot(A)
e, C_p = np.linalg.eigh(F_p)
C = A.dot(C_p)
C_occ = C[:, :norb]
D = np.einsum('pi,qi->pq', C_occ, C_occ, optimize=True)
return (C, D)
###Output
_____no_output_____
###Markdown
Next, let's write a function to perform DIIS extrapolation and generate a new solution vector. Recall that the DIIS-accellerated SCF algorithm is: Algorithm 1: DIIS within a generic SCF Iteration1. Compute **F**, append to list of previous trial vectors2. Compute AO orbital gradient **r**, append to list of previous residual vectors3. Compute RHF energy3. Check convergence criteria - If RMSD of **r** sufficiently small, and - If change in SCF energy sufficiently small, break4. Build **B** matrix from previous AO gradient vectors5. Solve Pulay equation for coefficients $\{c_i\}$6. Compute DIIS solution vector **F_DIIS** from $\{c_i\}$ and previous trial vectors7. Compute new orbital guess with **F_DIIS**In our function, we will perform steps 4-6 of the above algorithm. What information will we need to provide our function in order to do so? To build **B** (step 4 above) in the previous tutorial, we used:~~~python Build B matrixB_dim = len(F_list) + 1B = np.empty((B_dim, B_dim))B[-1, :] = -1B[:, -1] = -1B[-1, -1] = 0for i in xrange(len(F_list)): for j in xrange(len(F_list)): B[i, j] = np.einsum('ij,ij->', DIIS_RESID[i], DIIS_RESID[j], optimize=True)~~~Here, we see that we must have all previous DIIS residual vectors (`DIIS_RESID`), as well as knowledge about how many previous trial vectors there are (for the dimension of **B**). To solve the Pulay equation (step 5 above):~~~python Build RHS of Pulay equation rhs = np.zeros((B_dim))rhs[-1] = -1 Solve Pulay equation for c_i's with NumPycoeff = np.linalg.solve(B, rhs)~~~For this step, we only need the dimension of **B** (which we computed in step 4 above) and a NumPy routine, so this step doesn't require any additional arguments. Finally, to build the DIIS Fock matrix (step 6):~~~python Build DIIS Fock matrixF = np.zeros_like(F_list[0])for x in xrange(coeff.shape[0] - 1): F += coeff[x] * F_list[x]~~~Clearly, for this step, we need to know all the previous trial vectors (`F_list`) and the coefficients we generated in the previous step. In the cell below, write a funciton `diis_xtrap()` according to Algorithm 1 steps 4-6, using the above code snippets, which takes a list of previous trial vectors `F_list` and residual vectors `DIIS_RESID` as arguments and returns the new DIIS solution vector `F_DIIS`:
###Code
# ==> Build DIIS Extrapolation Function <==
def diis_xtrap(F_list, DIIS_RESID):
# Build B matrix
B_dim = len(F_list) + 1
B = np.empty((B_dim, B_dim))
B[-1, :] = -1
B[:, -1] = -1
B[-1, -1] = 0
for i in range(len(F_list)):
for j in range(len(F_list)):
B[i, j] = np.einsum('ij,ij->', DIIS_RESID[i], DIIS_RESID[j], optimize=True)
# Build RHS of Pulay equation
rhs = np.zeros((B_dim))
rhs[-1] = -1
# Solve Pulay equation for c_i's with NumPy
coeff = np.linalg.solve(B, rhs)
# Build DIIS Fock matrix
F_DIIS = np.zeros_like(F_list[0])
for x in range(coeff.shape[0] - 1):
F_DIIS += coeff[x] * F_list[x]
return F_DIIS
###Output
_____no_output_____
###Markdown
We are now ready to begin writing our UHF program! Let's begin by importing Psi4 and NumPy, and defining our molecule & basic options:
###Code
# ==> Import Psi4 & NumPy <==
import psi4
import numpy as np
# ==> Set Basic Psi4 Options <==
# Memory specification
psi4.set_memory(int(5e8))
numpy_memory = 2
# Set output file
psi4.core.set_output_file('output.dat', False)
# Define Physicist's water -- don't forget C1 symmetry!
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
# Set computation options
psi4.set_options({'guess': 'core',
'basis': 'cc-pvdz',
'scf_type': 'pk',
'e_convergence': 1e-8,
'reference': 'uhf'})
###Output
_____no_output_____
###Markdown
You may notice that in the above `psi4.set_options()` block, there are two additional options -- namely, `'guess': 'core'` and `'reference': 'uhf'`. These options make sure that when we ultimately check our program against Psi4, the options Psi4 uses are identical to our implementation. Next, let's define the options for our UHF program; we can borrow these options from our RHF implementation with DIIS acceleration that we completed in our last tutorial.
###Code
# ==> Set default program options <==
# Maximum SCF iterations
MAXITER = 40
# Energy convergence criterion
E_conv = 1.0e-6
D_conv = 1.0e-3
###Output
_____no_output_____
###Markdown
Static quantities like the ERI tensor, core Hamiltonian, and orthogonalization matrix have exactly the same form in UHF as in RHF. Unlike in RHF, however, we will need the number of $\alpha$ and $\beta$ electrons. Fortunately, both these values are available through querying the Wavefunction object. In the cell below, generate these static objects and compute each of the following:- Number of basis functions, `nbf`- Number of alpha electrons, `nalpha`- Number of beta electrons, `nbeta`- Number of doubly occupied orbitals, `ndocc` (Hint: In UHF, there can be unpaired electrons!)
###Code
# ==> Compute static 1e- and 2e- quantities with Psi4 <==
# Class instantiation
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option('basis'))
mints = psi4.core.MintsHelper(wfn.basisset())
# Overlap matrix
S = np.asarray(mints.ao_overlap())
# Number of basis Functions, alpha & beta orbitals, and # doubly occupied orbitals
nbf = wfn.nso()
nalpha = wfn.nalpha()
nbeta = wfn.nbeta()
ndocc = min(nalpha, nbeta)
print('Number of basis functions: %d' % (nbf))
print('Number of singly occupied orbitals: %d' % (abs(nalpha - nbeta)))
print('Number of doubly occupied orbitals: %d' % (ndocc))
# Memory check for ERI tensor
I_size = (nbf**4) * 8.e-9
print('\nSize of the ERI tensor will be {:4.2f} GB.'.format(I_size))
if I_size > numpy_memory:
psi4.core.clean()
raise Exception("Estimated memory utilization (%4.2f GB) exceeds allotted memory \
limit of %4.2f GB." % (I_size, numpy_memory))
# Build ERI Tensor
I = np.asarray(mints.ao_eri())
# Build core Hamiltonian
T = np.asarray(mints.ao_kinetic())
V = np.asarray(mints.ao_potential())
H = T + V
# Construct AO orthogonalization matrix A
A = mints.ao_overlap()
A.power(-0.5, 1.e-16)
A = np.asarray(A)
###Output
Number of basis functions: 24
Number of singly occupied orbitals: 0
Number of doubly occupied orbitals: 5
Size of the ERI tensor will be 0.00 GB.
###Markdown
Unlike the static quantities above, the CORE guess in UHF is slightly different from that in RHF. Since the $\alpha$ and $\beta$ electrons do not share spatial orbitals, we must construct a guess for *each* of the $\alpha$ and $\beta$ orbitals and densities. In the cell below, using the function `diag_F()`, construct the CORE guesses and compute the nuclear repulsion energy:
(Hint: The number of $\alpha$ orbitals is the same as the number of $\alpha$ electrons!)
###Code
# ==> Build alpha & beta CORE guess <==
Ca, Da = diag_F(H, nalpha)
Cb, Db = diag_F(H, nbeta)
# Get nuclear repulsion energy
E_nuc = mol.nuclear_repulsion_energy()
###Output
_____no_output_____
###Markdown
We are almost ready to perform our SCF iterations; beforehand, however, we must initialize variables for the current & previous SCF energies, as well as the lists that will hold previous residual vectors and trial vectors for the DIIS procedure. Since, in UHF, there are Fock matrices ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ for the $\alpha$ and $\beta$ orbitals, we must apply DIIS to each of these matrices separately. In the cell below, define empty lists to hold previous Fock matrices and residual vectors for both $\alpha$ and $\beta$ orbitals:
###Code
# ==> Pre-Iteration Setup <==
# SCF & Previous Energy
SCF_E = 0.0
E_old = 0.0
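# Note: the DIIS trial- and residual-vector lists (F_list_a, F_list_b, R_list_a, R_list_b)
# are created at the top of the SCF-iteration cell below.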
###Output
_____no_output_____
###Markdown
We are now ready to write the SCF iterations. The algorithm for a UHF-SCF iteration, with DIIS convergence acceleration, is:

Algorithm 2: DIIS within UHF-SCF Iteration
1. Build ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$, append to trial vector lists
2. Compute the DIIS residual for $\alpha$ and $\beta$, append to residual vector lists
3. Compute the UHF energy
4. Convergence check
   - If the average RMSD of the $\alpha$ and $\beta$ residuals is sufficiently small, and
   - If the change in UHF energy is sufficiently small, break
5. DIIS extrapolation of ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ to form new solution vectors
6. Compute new $\alpha$ and $\beta$ orbital & density guesses

In the cell below, write the UHF-SCF iteration according to Algorithm 2:
(Hint: Use your functions `diis_xtrap()` and `diag_F()` for Algorithm 2 steps 5 & 6, respectively)
###Code
# Trial & Residual Vector Lists -- one each for alpha & beta
F_list_a = []
F_list_b = []
R_list_a = []
R_list_b = []
# ==> UHF-SCF Iterations <==
print('==> Starting SCF Iterations <==\n')
# Begin Iterations
for scf_iter in range(1, MAXITER+1):
# Build Fa & Fb matrices
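    # Coulomb:  J^a_pq = (pq|rs) D^a_rs,  J^b_pq = (pq|rs) D^b_rs
    # Exchange: K^a_pq = (pr|qs) D^a_rs,  K^b_pq = (pr|qs) D^b_rs
    # so that F^a = H + (J^a + J^b) - K^a and F^b = H + (J^a + J^b) - K^b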
Ja = np.einsum('pqrs,rs->pq', I, Da, optimize=True)
Jb = np.einsum('pqrs,rs->pq', I, Db, optimize=True)
Ka = np.einsum('prqs,rs->pq', I, Da, optimize=True)
Kb = np.einsum('prqs,rs->pq', I, Db, optimize=True)
Fa = H + (Ja + Jb) - Ka
Fb = H + (Ja + Jb) - Kb
# Compute DIIS residual for Fa & Fb
diis_r_a = A.dot(Fa.dot(Da).dot(S) - S.dot(Da).dot(Fa)).dot(A)
diis_r_b = A.dot(Fb.dot(Db).dot(S) - S.dot(Db).dot(Fb)).dot(A)
# Append trial & residual vectors to lists
F_list_a.append(Fa)
F_list_b.append(Fb)
R_list_a.append(diis_r_a)
R_list_b.append(diis_r_b)
# Compute UHF Energy
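    # E_elec = 0.5 * sum_pq [ (D^a + D^b)_pq H_pq + D^a_pq F^a_pq + D^b_pq F^b_pq ]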
SCF_E = np.einsum('pq,pq->', (Da + Db), H, optimize=True)
SCF_E += np.einsum('pq,pq->', Da, Fa, optimize=True)
SCF_E += np.einsum('pq,pq->', Db, Fb, optimize=True)
SCF_E *= 0.5
SCF_E += E_nuc
dE = SCF_E - E_old
dRMS = 0.5 * (np.mean(diis_r_a**2)**0.5 + np.mean(diis_r_b**2)**0.5)
print('SCF Iteration %3d: Energy = %4.16f dE = % 1.5E dRMS = %1.5E' % (scf_iter, SCF_E, dE, dRMS))
# Convergence Check
if (abs(dE) < E_conv) and (dRMS < D_conv):
break
E_old = SCF_E
# DIIS Extrapolation
if scf_iter >= 2:
Fa = diis_xtrap(F_list_a, R_list_a)
Fb = diis_xtrap(F_list_b, R_list_b)
# Compute new orbital guess
Ca, Da = diag_F(Fa, nalpha)
Cb, Db = diag_F(Fb, nbeta)
# MAXITER exceeded?
if (scf_iter == MAXITER):
psi4.core.clean()
raise Exception("Maximum number of SCF iterations exceeded.")
# Post iterations
print('\nSCF converged.')
print('Final UHF Energy: %.8f [Eh]' % SCF_E)
###Output
==> Starting SCF Iterations <==
SCF Iteration 0: Energy = -68.9800327333871337 dE = -6.89800E+01 dRMS = 1.16551E-01
SCF Iteration 1: Energy = -69.6472544393141675 dE = -6.67222E-01 dRMS = 1.07430E-01
SCF Iteration 2: Energy = -72.8403031079928667 dE = -3.19305E+00 dRMS = 1.03959E-01
SCF Iteration 3: Energy = -75.7279773794242033 dE = -2.88767E+00 dRMS = 3.28422E-02
SCF Iteration 4: Energy = -75.9858651566443655 dE = -2.57888E-01 dRMS = 4.05758E-03
SCF Iteration 5: Energy = -75.9894173631280410 dE = -3.55221E-03 dRMS = 1.14648E-03
SCF Iteration 6: Energy = -75.9897793050353130 dE = -3.61942E-04 dRMS = 1.84785E-04
SCF Iteration 7: Energy = -75.9897954286870174 dE = -1.61237E-05 dRMS = 2.57274E-05
SCF Iteration 8: Energy = -75.9897957793742762 dE = -3.50687E-07 dRMS = 3.67191E-06
SCF converged.
Final UHF Energy: -75.98979578 [Eh]
###Markdown
Congratulations! You've written your very own Unrestricted Hartree-Fock program with DIIS convergence acceleration! Finally, let's check your final UHF energy against Psi4:
###Code
# Compare to Psi4
SCF_E_psi = psi4.energy('SCF')
psi4.compare_values(SCF_E_psi, SCF_E, 6, 'SCF Energy')
###Output
SCF Energy........................................................PASSED
###Markdown
Unrestricted Open-Shell Hartree-Fock

In the first two tutorials in this module, we wrote programs which implement a closed-shell formulation of Hartree-Fock theory using restricted orbitals, aptly named Restricted Hartree-Fock (RHF). In this tutorial, we will abandon strictly closed-shell systems and the notion of restricted orbitals in favor of a more general theory known as Unrestricted Hartree-Fock (UHF), which can accommodate more diverse molecules. In UHF, the orbitals occupied by spin up ($\alpha$) electrons and those occupied by spin down ($\beta$) electrons no longer have the same spatial component, e.g.,
$$\chi_i({\bf x}) = \begin{cases}\psi^{\alpha}_j({\bf r})\alpha(\omega) \\ \psi^{\beta}_j({\bf r})\beta(\omega)\end{cases},$$
meaning that they will not have the same orbital energy. This relaxation of orbital constraints allows for more variational flexibility, so UHF can always find a total energy solution that is at least as low as the RHF one.

I. Theoretical Overview

In UHF, we seek to solve the coupled equations
\begin{align}
{\bf F}^{\alpha}{\bf C}^{\alpha} &= {\bf SC}^{\alpha}{\bf\epsilon}^{\alpha} \\
{\bf F}^{\beta}{\bf C}^{\beta} &= {\bf SC}^{\beta}{\bf\epsilon}^{\beta},
\end{align}
which are the unrestricted generalizations of the restricted Roothaan equations, called the Pople-Nesbet equations. Here, the one-electron Fock matrices are given by
\begin{align}
F_{\mu\nu}^{\alpha} &= H_{\mu\nu} + (\mu\,\nu\mid\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\mid\nu\,\sigma)D_{\lambda\sigma}^{\alpha}\\
F_{\mu\nu}^{\beta} &= H_{\mu\nu} + (\mu\,\nu\mid\lambda\,\sigma)[D_{\lambda\sigma}^{\alpha} + D_{\lambda\sigma}^{\beta}] - (\mu\,\lambda\mid\nu\,\sigma)D_{\lambda\sigma}^{\beta},
\end{align}
where the exchange term for each spin involves only the same-spin density, and the density matrices $D_{\lambda\sigma}^{\alpha}$ and $D_{\lambda\sigma}^{\beta}$ are given by
\begin{align}
D_{\lambda\sigma}^{\alpha} &= C_{\sigma i}^{\alpha}C_{\lambda i}^{\alpha}\\
D_{\lambda\sigma}^{\beta} &= C_{\sigma i}^{\beta}C_{\lambda i}^{\beta}.
\end{align}
Unlike for RHF, the orbital coefficient matrices ${\bf C}^{\alpha}$ and ${\bf C}^{\beta}$ are of dimension $M\times N^{\alpha}$ and $M\times N^{\beta}$, where $M$ is the number of AO basis functions and $N^{\alpha}$ ($N^{\beta}$) is the number of $\alpha$ ($\beta$) electrons. The total UHF energy is given by
\begin{align}
E^{\rm UHF}_{\rm total} &= E^{\rm UHF}_{\rm elec} + E^{\rm BO}_{\rm nuc},\;\;{\rm with}\\
E^{\rm UHF}_{\rm elec} &= \frac{1}{2}[({\bf D}^{\alpha} + {\bf D}^{\beta}){\bf H} + {\bf D}^{\alpha}{\bf F}^{\alpha} + {\bf D}^{\beta}{\bf F}^{\beta}].
\end{align}

II. Implementation

In any SCF program, there will be several common elements which can be abstracted from the program itself into separate modules, classes, or functions to 'clean up' the code that will need to be written explicitly; examples of this concept can be seen throughout the Psi4NumPy reference implementations. For the purposes of this tutorial, we can achieve some degree of code cleanup without sacrificing readability and clarity by focusing on abstracting only the parts of the code which are both
- Lengthy subroutines, and
- Used repeatedly.

In our UHF program, let's use what we've learned in the last tutorial by also implementing DIIS convergence acceleration for our SCF iterations. With this in mind, the two subroutines which would particularly benefit from abstraction are
1. Orthogonalize & diagonalize the Fock matrix
2. Extrapolate previous trial vectors to form a new DIIS solution vector

Before we start writing our UHF program, let's try to write functions which can perform the above tasks so that we can use them in our implementation of UHF. Recall that defining functions in Python has the following syntax:
~~~python
def function_name(*args, **kwargs):
    # function block
    return return_values
~~~
A thorough discussion of defining functions in Python can be found [here](https://docs.python.org/2/tutorial/controlflow.html#defining-functions "Go to Python docs"). First, let's write a function which can diagonalize the Fock matrix and return the orbital coefficient matrix **C** and the density matrix **D**. From our RHF tutorial, this subroutine is executed with:
~~~python
F_p = A.dot(F).dot(A)
e, C_p = np.linalg.eigh(F_p)
C = A.dot(C_p)
C_occ = C[:, :ndocc]
D = np.einsum('pi,qi->pq', C_occ, C_occ)
~~~
Examining this code block, there are three quantities which must be specified beforehand:
- Fock matrix, **F**
- Orthogonalization matrix, ${\bf A} = {\bf S}^{-1/2}$
- Number of doubly occupied orbitals, `ndocc`

However, since the orthogonalization matrix **A** is a static quantity (only built once, then left alone) we may choose to leave **A** as a *global* quantity, instead of an argument to our function. In the cell below, using the code snippet given above, write a function `diag_F()` which takes **F** and the number of orbitals `norb` as arguments, and returns **C** and **D**:
###Code
# ==> Define function to diagonalize F <==
def diag_F(F, norb):
F_p = A.dot(F).dot(A)
e, C_p = np.linalg.eigh(F_p)
C = A.dot(C_p)
C_occ = C[:, :norb]
D = np.einsum('pi,qi->pq', C_occ, C_occ)
return (C, D)
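# Example usage (as in the CORE-guess cell below): Ca, Da = diag_F(H, nalpha)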
###Output
_____no_output_____
###Markdown
Next, let's write a function to perform DIIS extrapolation and generate a new solution vector. Recall that the DIIS-accelerated SCF algorithm is:

Algorithm 1: DIIS within a generic SCF Iteration
1. Compute **F**, append to list of previous trial vectors
2. Compute AO orbital gradient **r**, append to list of previous residual vectors
3. Compute RHF energy
4. Check convergence criteria
   - If RMSD of **r** sufficiently small, and
   - If change in SCF energy sufficiently small, break
5. Build **B** matrix from previous AO gradient vectors
6. Solve Pulay equation for coefficients $\{c_i\}$
7. Compute DIIS solution vector **F_DIIS** from $\{c_i\}$ and previous trial vectors
8. Compute new orbital guess with **F_DIIS**

In our function, we will perform steps 5-7 of the above algorithm. What information will we need to provide our function in order to do so? To build **B** (step 5 above) in the previous tutorial, we used:
~~~python
# Build B matrix
B_dim = len(F_list) + 1
B = np.empty((B_dim, B_dim))
B[-1, :] = -1
B[:, -1] = -1
B[-1, -1] = 0
for i in range(len(F_list)):
    for j in range(len(F_list)):
        B[i, j] = np.einsum('ij,ij->', DIIS_RESID[i], DIIS_RESID[j])
~~~
Here, we see that we must have all previous DIIS residual vectors (`DIIS_RESID`), as well as knowledge about how many previous trial vectors there are (for the dimension of **B**). To solve the Pulay equation (step 6 above):
~~~python
# Build RHS of Pulay equation
rhs = np.zeros((B_dim))
rhs[-1] = -1
# Solve Pulay equation for c_i's with NumPy
coeff = np.linalg.solve(B, rhs)
~~~
For this step, we only need the dimension of **B** (which we computed in step 5 above) and a NumPy routine, so this step doesn't require any additional arguments. Finally, to build the DIIS Fock matrix (step 7):
~~~python
# Build DIIS Fock matrix
F = np.zeros_like(F_list[0])
for x in range(coeff.shape[0] - 1):
    F += coeff[x] * F_list[x]
~~~
Clearly, for this step, we need to know all the previous trial vectors (`F_list`) and the coefficients we generated in the previous step. In the cell below, write a function `diis_xtrap()` according to Algorithm 1 steps 5-7, using the above code snippets, which takes a list of previous trial vectors `F_list` and residual vectors `DIIS_RESID` as arguments and returns the new DIIS solution vector `F_DIIS`:
###Code
# ==> Build DIIS Extrapolation Function <==
def diis_xtrap(F_list, DIIS_RESID):
# Build B matrix
B_dim = len(F_list) + 1
B = np.empty((B_dim, B_dim))
B[-1, :] = -1
B[:, -1] = -1
B[-1, -1] = 0
for i in range(len(F_list)):
for j in range(len(F_list)):
B[i, j] = np.einsum('ij,ij->', DIIS_RESID[i], DIIS_RESID[j])
# Build RHS of Pulay equation
rhs = np.zeros((B_dim))
rhs[-1] = -1
# Solve Pulay equation for c_i's with NumPy
coeff = np.linalg.solve(B, rhs)
# Build DIIS Fock matrix
F_DIIS = np.zeros_like(F_list[0])
for x in range(coeff.shape[0] - 1):
F_DIIS += coeff[x] * F_list[x]
return F_DIIS
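# Example usage (as in the SCF loop below): Fa = diis_xtrap(F_list_a, R_list_a)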
###Output
_____no_output_____
###Markdown
We are now ready to begin writing our UHF program! Let's begin by importing Psi4 and NumPy, and defining our molecule & basic options:
###Code
# ==> Import Psi4 & NumPy <==
import psi4
import numpy as np
# ==> Set Basic Psi4 Options <==
# Memory specification
psi4.set_memory(int(5e8))
numpy_memory = 2
# Set output file
psi4.core.set_output_file('output.dat', False)
# Define Physicist's water -- don't forget C1 symmetry!
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
# Set computation options
psi4.set_options({'guess': 'core',
'basis': 'cc-pvdz',
'scf_type': 'pk',
'e_convergence': 1e-8,
'reference': 'uhf'})
###Output
_____no_output_____
###Markdown
You may notice that in the above `psi4.set_options()` block, there are two additional options -- namely, `'guess': 'core'` and `'reference': 'uhf'`. These options make sure that when we ultimately check our program against Psi4, the options Psi4 uses are identical to those of our implementation. Next, let's define the options for our UHF program; we can borrow these options from the RHF implementation with DIIS acceleration that we completed in the last tutorial.
###Code
# ==> Set default program options <==
# Maximum SCF iterations
MAXITER = 40
# Energy convergence criterion
E_conv = 1.0e-6
D_conv = 1.0e-3
###Output
_____no_output_____
###Markdown
Static quantities like the ERI tensor, core Hamiltonian, and orthogonalization matrix have exactly the same form in UHF as in RHF. Unlike in RHF, however, we will need the number of $\alpha$ and $\beta$ electrons. Fortunately, both of these values are available by querying the Wavefunction object. In the cell below, generate these static objects and compute each of the following:
- Number of basis functions, `nbf`
- Number of alpha electrons, `nalpha`
- Number of beta electrons, `nbeta`
- Number of doubly occupied orbitals, `ndocc` (Hint: In UHF, there can be unpaired electrons!)
###Code
# ==> Compute static 1e- and 2e- quantities with Psi4 <==
# Class instantiation
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option('basis'))
mints = psi4.core.MintsHelper(wfn.basisset())
# Overlap matrix
S = np.asarray(mints.ao_overlap())
# Number of basis Functions, alpha & beta orbitals, and # doubly occupied orbitals
nbf = wfn.nso()
nalpha = wfn.nalpha()
nbeta = wfn.nbeta()
ndocc = min(nalpha, nbeta)
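# Doubly occupied orbitals hold one alpha and one beta electron;
# any remaining electrons occupy singly occupied orbitals.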
print('Number of basis functions: %d' % (nbf))
print('Number of singly occupied orbitals: %d' % (abs(nalpha - nbeta)))
print('Number of doubly occupied orbitals: %d' % (ndocc))
# Memory check for ERI tensor
I_size = (nbf**4) * 8.e-9
print('\nSize of the ERI tensor will be {:4.2f} GB.'.format(I_size))
memory_footprint = I_size * 1.5
if I_size > numpy_memory:
psi4.core.clean()
raise Exception("Estimated memory utilization (%4.2f GB) exceeds allotted memory \
limit of %4.2f GB." % (memory_footprint, numpy_memory))
# Build ERI Tensor
I = np.asarray(mints.ao_eri())
# Build core Hamiltonian
T = np.asarray(mints.ao_kinetic())
V = np.asarray(mints.ao_potential())
H = T + V
# Construct AO orthogonalization matrix A
A = mints.ao_overlap()
A.power(-0.5, 1.e-16)
A = np.asarray(A)
###Output
Number of basis functions: 24
Number of singly occupied orbitals: 0
Number of doubly occupied orbitals: 5
Size of the ERI tensor will be 0.00 GB.
###Markdown
Unlike the static quantities above, the CORE guess in UHF is slightly different from that in RHF. Since the $\alpha$ and $\beta$ electrons do not share spatial orbitals, we must construct a guess for *each* of the $\alpha$ and $\beta$ orbitals and densities. In the cell below, using the function `diag_F()`, construct the CORE guesses and compute the nuclear repulsion energy:
(Hint: The number of $\alpha$ orbitals is the same as the number of $\alpha$ electrons!)
###Code
# ==> Build alpha & beta CORE guess <==
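# The 'core' guess simply diagonalizes the core Hamiltonian H, i.e. the Fock
# matrix with all two-electron (J and K) contributions omitted.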
Ca, Da = diag_F(H, nalpha)
Cb, Db = diag_F(H, nbeta)
# Get nuclear repulsion energy
E_nuc = mol.nuclear_repulsion_energy()
###Output
_____no_output_____
###Markdown
We are almost ready to perform our SCF iterations; beforehand, however, we must initialize variables for the current & previous SCF energies, as well as the lists that will hold previous residual vectors and trial vectors for the DIIS procedure. Since, in UHF, there are Fock matrices ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ for the $\alpha$ and $\beta$ orbitals, we must apply DIIS to each of these matrices separately. In the cell below, define empty lists to hold previous Fock matrices and residual vectors for both $\alpha$ and $\beta$ orbitals:
###Code
# ==> Pre-Iteration Setup <==
# SCF & Previous Energy
SCF_E = 0.0
E_old = 0.0
###Output
_____no_output_____
###Markdown
We are now ready to write the SCF iterations. The algorithm for a UHF-SCF iteration, with DIIS convergence acceleration, is:

Algorithm 2: DIIS within UHF-SCF Iteration
1. Build ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$, append to trial vector lists
2. Compute the DIIS residual for $\alpha$ and $\beta$, append to residual vector lists
3. Compute the UHF energy
4. Convergence check
   - If the average RMSD of the $\alpha$ and $\beta$ residuals is sufficiently small, and
   - If the change in UHF energy is sufficiently small, break
5. DIIS extrapolation of ${\bf F}^{\alpha}$ and ${\bf F}^{\beta}$ to form new solution vectors
6. Compute new $\alpha$ and $\beta$ orbital & density guesses

In the cell below, write the UHF-SCF iteration according to Algorithm 2:
(Hint: Use your functions `diis_xtrap()` and `diag_F()` for Algorithm 2 steps 5 & 6, respectively)
###Code
# Trial & Residual Vector Lists -- one each for alpha & beta
F_list_a = []
F_list_b = []
R_list_a = []
R_list_b = []
# ==> UHF-SCF Iterations <==
print('==> Starting SCF Iterations <==\n')
# Begin Iterations
for scf_iter in range(MAXITER):
# Build Fa & Fb matrices
Ja = np.einsum('pqrs,rs->pq', I, Da)
Jb = np.einsum('pqrs,rs->pq', I, Db)
Ka = np.einsum('prqs,rs->pq', I, Da)
Kb = np.einsum('prqs,rs->pq', I, Db)
Fa = H + (Ja + Jb) - Ka
Fb = H + (Ja + Jb) - Kb
# Compute DIIS residual for Fa & Fb
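    # The DIIS residual r = A (F D S - S D F) A is the orbital gradient
    # transformed into the orthonormal AO basis (A = S^(-1/2)).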
diis_r_a = A.dot(Fa.dot(Da).dot(S) - S.dot(Da).dot(Fa)).dot(A)
diis_r_b = A.dot(Fb.dot(Db).dot(S) - S.dot(Db).dot(Fb)).dot(A)
# Append trial & residual vectors to lists
F_list_a.append(Fa)
F_list_b.append(Fb)
R_list_a.append(diis_r_a)
R_list_b.append(diis_r_b)
# Compute UHF Energy
SCF_E = np.einsum('pq,pq->', (Da + Db), H)
SCF_E += np.einsum('pq,pq->', Da, Fa)
SCF_E += np.einsum('pq,pq->', Db, Fb)
SCF_E *= 0.5
SCF_E += E_nuc
dE = SCF_E - E_old
dRMS = 0.5 * (np.mean(diis_r_a**2)**0.5 + np.mean(diis_r_b**2)**0.5)
print('SCF Iteration %3d: Energy = %4.16f dE = % 1.5E dRMS = %1.5E' % (scf_iter, SCF_E, dE, dRMS))
# Convergence Check
if (abs(dE) < E_conv) and (dRMS < D_conv):
break
E_old = SCF_E
# DIIS Extrapolation
if scf_iter >= 2:
Fa = diis_xtrap(F_list_a, R_list_a)
Fb = diis_xtrap(F_list_b, R_list_b)
# Compute new orbital guess
Ca, Da = diag_F(Fa, nalpha)
Cb, Db = diag_F(Fb, nbeta)
# MAXITER exceeded?
    if (scf_iter == (MAXITER - 1)):
psi4.core.clean()
raise Exception("Maximum number of SCF iterations exceeded.")
# Post iterations
print('\nSCF converged.')
print('Final UHF Energy: %.8f [Eh]' % SCF_E)
###Output
==> Starting SCF Iterations <==
SCF Iteration 0: Energy = -74.1207806468836452 dE = 0.00000E+00 dRMS = 8.64677E-02
SCF Iteration 1: Energy = -74.8671819457688485 dE = -7.46401E-01 dRMS = 6.52840E-02
SCF Iteration 2: Energy = -75.4149087803903342 dE = -5.47727E-01 dRMS = 5.21690E-02
SCF Iteration 3: Energy = -75.9800488561561309 dE = -5.65140E-01 dRMS = 6.34267E-03
SCF Iteration 4: Energy = -75.9894383301614340 dE = -9.38947E-03 dRMS = 5.45826E-04
SCF Iteration 5: Energy = -75.9897683674259383 dE = -3.30037E-04 dRMS = 1.70671E-04
SCF Iteration 6: Energy = -75.9897948623176376 dE = -2.64949E-05 dRMS = 4.28126E-05
SCF Iteration 7: Energy = -75.9897957712875609 dE = -9.08970E-07 dRMS = 5.40285E-06
SCF converged.
Final UHF Energy: -75.98979577 [Eh]
###Markdown
Congratulations! You've written your very own Unrestricted Hartree-Fock program with DIIS convergence acceleration! Finally, let's check your final UHF energy against Psi4:
###Code
# Compare to Psi4
SCF_E_psi = psi4.energy('SCF')
psi4.compare_values(SCF_E_psi, SCF_E, 6, 'SCF Energy')
###Output
SCF Energy........................................................PASSED
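Because the water molecule used here is a closed-shell species, the UHF solution reduces to the RHF one. As a purely illustrative aside -- a sketch, not part of the tutorial's workflow -- an open-shell system can be specified by adding an explicit charge/multiplicity line to the geometry string, after which the same program can be rerun and the $\alpha$ and $\beta$ Fock matrices will genuinely differ:
~~~python
# Hypothetical open-shell variant: the water cation (charge +1, doublet).
# The leading "1 2" line sets charge and multiplicity.
mol_cation = psi4.geometry("""
1 2
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
~~~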
|