Notebooks/Time Series Analysis and Forecasting.ipynb
###Markdown
__Time series forecasting__ is the use of a model to predict future values based on previously observed values.
###Code
import warnings
import itertools
import numpy as np
import matplotlib.pyplot as plt
warnings.filterwarnings("ignore")
plt.style.use('fivethirtyeight')
import pandas as pd
import statsmodels.api as sm
import matplotlib
%matplotlib inline
matplotlib.rcParams['axes.labelsize'] = 14
matplotlib.rcParams['xtick.labelsize'] = 12
matplotlib.rcParams['ytick.labelsize'] = 12
matplotlib.rcParams['text.color'] = 'k'
df = pd.read_excel("Sample - Superstore.xls")
df.head()
furniture = df.loc[df['Category'] == 'Furniture']
furniture.head()
furniture['Order Date'].min(), furniture['Order Date'].max()
###Output
_____no_output_____
###Markdown
We have a good four years of furniture sales data. EDA and data preprocessing
###Code
cols = ['Row ID', 'Order ID', 'Ship Date', 'Ship Mode', 'Customer ID', 'Customer Name', 'Segment', 'Country', 'City', 'State',
'Postal Code', 'Region', 'Product ID', 'Category', 'Sub-Category', 'Product Name', 'Quantity', 'Discount', 'Profit']
furniture.drop(cols, axis=1, inplace=True)
furniture=furniture.sort_values('Order Date')
furniture.isnull().sum()
furniture=furniture.groupby('Order Date')['Sales'].sum().reset_index()
furniture.head()
###Output
_____no_output_____
###Markdown
Indexing with Time Series Data
###Code
furniture = furniture.set_index('Order Date')
furniture.index
###Output
_____no_output_____
###Markdown
We will use the average daily sales value for each month instead, using the start of each month as the timestamp.
###Code
y = furniture['Sales'].resample('MS').mean()
y['2017':]
###Output
_____no_output_____
###Markdown
Visualizing Furniture Sales Time Series Data
###Code
y.plot(figsize=(15, 6))
plt.show()
###Output
_____no_output_____
###Markdown
Some distinguishable patterns appear when we plot the data. The time series has a clear seasonal pattern: sales are always low at the beginning of the year and high at the end of the year. We can also visualize our data using a method called time-series decomposition, which allows us to decompose the time series into three distinct components: trend, seasonality, and noise.
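For an additive model, this decomposition corresponds to writing the series as $y_t = T_t + S_t + \varepsilon_t$, the sum of a trend term, a seasonal term, and a residual (noise) term.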
###Code
from pylab import rcParams
rcParams['figure.figsize'] = 18, 8
decomposition = sm.tsa.seasonal_decompose(y, model='additive')
fig = decomposition.plot()
plt.show()
###Output
_____no_output_____
###Markdown
The plot above clearly shows that furniture sales are unstable.
Time series forecasting with ARIMA
We are going to apply one of the most commonly used methods for time-series forecasting, known as ARIMA, which stands for AutoRegressive Integrated Moving Average. ARIMA models are denoted ARIMA(p, d, q), where p is the autoregressive order, d the degree of differencing, and q the moving-average order; the seasonal variant adds a further set of parameters (P, D, Q, s) to capture seasonality.
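As a quick illustration of what each candidate configuration involves, a single model can be fitted on its own and its AIC inspected. This is a minimal sketch using the same statsmodels API and the monthly series `y` defined above; `candidate` is just a placeholder name:
###Code
# Minimal sketch: fit one candidate seasonal ARIMA configuration and report its AIC.
candidate = sm.tsa.statespace.SARIMAX(y,
                                      order=(1, 1, 1),
                                      seasonal_order=(1, 1, 0, 12),
                                      enforce_stationarity=False,
                                      enforce_invertibility=False)
print(candidate.fit().aic)
###Output
_____no_output_____
###Markdown
The cells below enumerate the candidate (p, d, q) x (P, D, Q, 12) combinations and repeat this fit for each of them: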
###Code
p = d = q = range(0, 2)
pdq = list(itertools.product(p, d, q))
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
###Output
Examples of parameter combinations for Seasonal ARIMA...
SARIMAX: (0, 0, 1) x (0, 0, 1, 12)
SARIMAX: (0, 0, 1) x (0, 1, 0, 12)
SARIMAX: (0, 1, 0) x (0, 1, 1, 12)
SARIMAX: (0, 1, 0) x (1, 0, 0, 12)
###Markdown
This step is parameter selection for our furniture sales ARIMA time series model. Our goal here is to use a "grid search" to find the parameter combination that yields the best performance for our model.
###Code
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
mod = sm.tsa.statespace.SARIMAX(y,
order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
except:
continue
###Output
ARIMA(0, 0, 0)x(0, 0, 1, 12)12 - AIC:1446.5593227130305
ARIMA(0, 0, 0)x(1, 0, 0, 12)12 - AIC:497.23144334183365
ARIMA(0, 0, 0)x(1, 0, 1, 12)12 - AIC:1172.208674145885
ARIMA(0, 0, 0)x(1, 1, 0, 12)12 - AIC:318.0047199116341
ARIMA(0, 0, 1)x(0, 0, 0, 12)12 - AIC:720.9252270758095
ARIMA(0, 0, 1)x(0, 0, 1, 12)12 - AIC:2900.357535652858
ARIMA(0, 0, 1)x(0, 1, 0, 12)12 - AIC:466.56074298091255
ARIMA(0, 0, 1)x(1, 0, 0, 12)12 - AIC:499.574045803366
ARIMA(0, 0, 1)x(1, 0, 1, 12)12 - AIC:2513.1394870316744
ARIMA(0, 0, 1)x(1, 1, 0, 12)12 - AIC:319.98848769468657
ARIMA(0, 1, 0)x(0, 0, 1, 12)12 - AIC:1250.2320272227237
ARIMA(0, 1, 0)x(1, 0, 0, 12)12 - AIC:497.78896630044073
ARIMA(0, 1, 0)x(1, 0, 1, 12)12 - AIC:1550.2003231687213
ARIMA(0, 1, 0)x(1, 1, 0, 12)12 - AIC:319.7714068109211
ARIMA(0, 1, 1)x(0, 0, 0, 12)12 - AIC:649.9056176816999
ARIMA(0, 1, 1)x(0, 0, 1, 12)12 - AIC:2683.886393076119
ARIMA(0, 1, 1)x(0, 1, 0, 12)12 - AIC:458.8705548482932
ARIMA(0, 1, 1)x(1, 0, 0, 12)12 - AIC:486.18329774427826
ARIMA(0, 1, 1)x(1, 0, 1, 12)12 - AIC:3144.981130223559
ARIMA(0, 1, 1)x(1, 1, 0, 12)12 - AIC:310.75743684172994
ARIMA(1, 0, 0)x(0, 0, 0, 12)12 - AIC:692.1645522067712
ARIMA(1, 0, 0)x(0, 0, 1, 12)12 - AIC:1343.1777877543473
ARIMA(1, 0, 0)x(0, 1, 0, 12)12 - AIC:479.46321478521355
ARIMA(1, 0, 0)x(1, 0, 0, 12)12 - AIC:480.92593679352177
ARIMA(1, 0, 0)x(1, 0, 1, 12)12 - AIC:1243.8088413604426
ARIMA(1, 0, 0)x(1, 1, 0, 12)12 - AIC:304.4664675084554
ARIMA(1, 0, 1)x(0, 0, 0, 12)12 - AIC:665.779444218685
ARIMA(1, 0, 1)x(0, 0, 1, 12)12 - AIC:82073.66352065578
ARIMA(1, 0, 1)x(0, 1, 0, 12)12 - AIC:468.3685195814987
ARIMA(1, 0, 1)x(1, 0, 0, 12)12 - AIC:482.5763323876739
ARIMA(1, 0, 1)x(1, 0, 1, 12)12 - AIC:nan
ARIMA(1, 0, 1)x(1, 1, 0, 12)12 - AIC:306.0156002122138
ARIMA(1, 1, 0)x(0, 0, 0, 12)12 - AIC:671.2513547541902
ARIMA(1, 1, 0)x(0, 0, 1, 12)12 - AIC:1205.945960251849
ARIMA(1, 1, 0)x(0, 1, 0, 12)12 - AIC:479.2003422281134
ARIMA(1, 1, 0)x(1, 0, 0, 12)12 - AIC:475.34036587848493
ARIMA(1, 1, 0)x(1, 0, 1, 12)12 - AIC:1269.52639945458
ARIMA(1, 1, 0)x(1, 1, 0, 12)12 - AIC:300.6270901345443
ARIMA(1, 1, 1)x(0, 0, 0, 12)12 - AIC:649.0318019835024
ARIMA(1, 1, 1)x(0, 0, 1, 12)12 - AIC:101786.44160210912
ARIMA(1, 1, 1)x(0, 1, 0, 12)12 - AIC:460.4762687610111
ARIMA(1, 1, 1)x(1, 0, 0, 12)12 - AIC:469.52503546608614
ARIMA(1, 1, 1)x(1, 0, 1, 12)12 - AIC:2651.570039388935
ARIMA(1, 1, 1)x(1, 1, 0, 12)12 - AIC:297.7875439553055
###Markdown
The above output suggests that SARIMAX(1, 1, 1)x(1, 1, 0, 12) yields the lowest AIC value (about 297.79), so we take this combination as the optimal option.
Fitting the ARIMA model
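For reference, the Akaike Information Criterion used to rank the candidates is $\mathrm{AIC} = 2k - 2\ln(\hat{L})$, where $k$ is the number of estimated parameters and $\hat{L}$ the maximised likelihood; the model fitted below is the grid-search winner under this criterion.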
###Code
mod = sm.tsa.statespace.SARIMAX(y,
order=(1, 1, 1),
seasonal_order=(1, 1, 0, 12),
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print(results.summary().tables[1])
###Output
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 0.0146 0.342 0.043 0.966 -0.655 0.684
ma.L1 -1.0000 0.360 -2.781 0.005 -1.705 -0.295
ar.S.L12 -0.0253 0.042 -0.609 0.543 -0.107 0.056
sigma2 2.958e+04 1.22e-05 2.43e+09 0.000 2.96e+04 2.96e+04
==============================================================================
###Markdown
Run model diagnostics to investigate any unusual behavior.
###Code
results.plot_diagnostics(figsize=(16, 8))
plt.show()
###Output
_____no_output_____
###Markdown
Model diagnostics suggest that the model residuals are approximately normally distributed.
Validating forecasts
To help us understand the accuracy of our forecasts, we compare predicted sales to the real sales of the time series, setting the forecasts to start at 2017-01-01 and run to the end of the data.
###Code
pred = results.get_prediction(start=pd.to_datetime('2017-01-01'), dynamic=False)
pred_ci = pred.conf_int()
ax = y['2014':].plot(label='observed')
pred.predicted_mean.plot(ax=ax, label='One-step ahead Forecast', alpha=.7, figsize=(14, 7))
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='k', alpha=.2)
ax.set_xlabel('Date')
ax.set_ylabel('Furniture Sales')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Overall, our forecasts align with the true values very well, showing an upward trend that starts at the beginning of the year and capturing the seasonality toward the end of the year.
###Code
y_forecasted = pred.predicted_mean
y_truth = y['2017-01-01':]
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
print('The Root Mean Squared Error of our forecasts is {}'.format(round(np.sqrt(mse), 2)))
###Output
The Root Mean Squared Error of our forecasts is 151.64
###Markdown
In statistics, the mean squared error (MSE) of an estimator measures the average of the squares of the errors, that is, the average squared difference between the estimated values and what is estimated. The MSE is a measure of the quality of an estimator: it is always non-negative, and the smaller the MSE, the closer we are to finding the line of best fit. The Root Mean Square Error (RMSE) tells us that our model was able to forecast the average daily furniture sales in the test set to within 151.64 of the real sales. Our daily furniture sales range from around 400 to over 1200, so this is a pretty good model so far.
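Formally, $\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2$ and $\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$, so the RMSE is expressed in the same units as the sales values themselves.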
###Code
pred_uc = results.get_forecast(steps=100)
pred_ci = pred_uc.conf_int()
ax = y.plot(label='observed', figsize=(14, 7))
pred_uc.predicted_mean.plot(ax=ax, label='Forecast')
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('Furniture Sales')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Time Series Modeling with Prophet
Released by Facebook in 2017, the forecasting tool Prophet is designed for analyzing time series that display patterns on different time scales, such as yearly, weekly, and daily. It also has advanced capabilities for modeling the effects of holidays on a time series and implementing custom changepoints, so we use Prophet here to get a model up and running. See https://blog.exploratory.io/an-introduction-to-time-series-forecasting-with-prophet-package-in-exploratory-129ed0c12112
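Under the hood, Prophet fits an additive model of the form $y(t) = g(t) + s(t) + h(t) + \varepsilon_t$, where $g(t)$ is the trend, $s(t)$ the periodic seasonal component, $h(t)$ the holiday effects, and $\varepsilon_t$ the error term.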
###Code
furniture = df.loc[df['Category'] == 'Furniture']
cols = ['Row ID', 'Order ID', 'Ship Date', 'Ship Mode', 'Customer ID', 'Customer Name', 'Segment', 'Country', 'City', 'State', 'Postal Code', 'Region', 'Product ID', 'Category', 'Sub-Category', 'Product Name', 'Quantity', 'Discount', 'Profit']
furniture.drop(cols, axis=1, inplace=True)
furniture = furniture.sort_values('Order Date')
furniture = furniture.groupby('Order Date')['Sales'].sum().reset_index()
furniture = furniture.set_index('Order Date')
y_furniture = furniture['Sales'].resample('MS').mean()
furniture = pd.DataFrame({'Order Date':y_furniture.index, 'Sales':y_furniture.values})
from fbprophet import Prophet
furniture = furniture.rename(columns={'Order Date': 'ds', 'Sales': 'y'})
furniture.head()
furniture_model = Prophet(interval_width=0.95)
furniture_model.fit(furniture)
furniture_forecast = furniture_model.make_future_dataframe(periods=36, freq='MS')
furniture_forecast.tail()
furniture_forecast = furniture_model.predict(furniture_forecast)
furniture_forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
plt.figure(figsize=(7, 5))
furniture_model.plot(furniture_forecast, xlabel = 'Date', ylabel = 'Sales')
plt.title('Furniture Sales')
#furniture_model.plot(furniture_forecast,
# uncertainty=True)
furniture_model.plot_components(furniture_forecast)
###Output
_____no_output_____
docs/source/examples/plotting/density.ipynb
###Markdown
Density and Contour Plots
While individual point data are useful, we commonly want to understand the distribution of our data within a particular subspace, and compare that to a reference or other dataset. Pyrolite includes a few functions for visualising data density, most based on Gaussian kernel density estimation and evaluation over a grid. The examples below highlight some of the currently implemented features.
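For reference, a Gaussian KDE estimates the density as $\hat{f}(x) = \frac{1}{n}\sum_{i=1}^{n} K_h(x - x_i)$, where $K_h$ is a Gaussian kernel with bandwidth $h$; pyrolite evaluates such an estimate over a regular grid for plotting.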
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pyrolite.plot import pyroplot
from pyrolite.plot.density import density
from pyrolite.comp.codata import close
np.random.seed(82)
###Output
_____no_output_____
###Markdown
First we create some example data:
###Code
oxs = ["SiO2", "CaO", "MgO", "Na2O"]
ys = np.random.rand(1000, len(oxs))
ys[:, 1] += 0.7
ys[:, 2] += 1.0
df = pd.DataFrame(data=close(np.exp(ys)), columns=oxs)
###Output
_____no_output_____
###Markdown
A minimal density plot can be constructed as follows:
###Code
ax = df.loc[:, ["SiO2", "MgO"]].pyroplot.density()
df.loc[:, ["SiO2", "MgO"]].pyroplot.scatter(ax=ax, s=10, alpha=0.3, c="k", zorder=2)
plt.show()
###Output
_____no_output_____
###Markdown
A colorbar linked to the KDE estimate colormap can be added using the `colorbar` boolean switch:
###Code
ax = df.loc[:, ["SiO2", "MgO"]].pyroplot.density(colorbar=True)
plt.show()
###Output
_____no_output_____
###Markdown
`density` by default will create a new axis, but can also be plotted over an existing axis for more control:
###Code
fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(12, 5))
df.loc[:, ["SiO2", "MgO"]].pyroplot.density(ax=ax[0])
df.loc[:, ["SiO2", "CaO"]].pyroplot.density(ax=ax[1])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Contours are also easily created, which by default are percentile values:
###Code
ax = df.loc[:, ["SiO2", "CaO"]].pyroplot.scatter(s=10, alpha=0.3, c="k", zorder=2)
df.loc[:, ["SiO2", "CaO"]].pyroplot.density(ax=ax, contours=[0.95, 0.66, 0.33])
plt.show()
###Output
_____no_output_____
###Markdown
Geochemical data is commonly log-normally distributed and is best analysed and visualised after log-transformation. The density estimation can be conducted over log-spaced grids (individually for the x and y axes, using the `logx` and `logy` boolean switches). Notably, this makes both the KDE image and contours behave more naturally.
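As a minimal sketch of the `logx`/`logy` switches, assuming only the `df` created earlier (the choice of columns is arbitrary):
###Code
# Minimal sketch: evaluate the KDE over a log-spaced grid on both axes.
ax = df.loc[:, ["SiO2", "MgO"]].pyroplot.density(logx=True, logy=True)
plt.show()
###Output
_____no_output_____
###Markdown
The example below compares linear, log, and semi-log grids and axis scales side by side: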
###Code
# some asymmetric data
from scipy import stats
xs = stats.norm.rvs(loc=6, scale=3, size=(200, 1))
ys = stats.norm.rvs(loc=20, scale=3, size=(200, 1)) + 5 * xs + 50
data = np.append(xs, ys, axis=1).T
asym_df = pd.DataFrame(np.exp(np.append(xs, ys, axis=1) / 25.0))
asym_df.columns = ["A", "B"]
grids = ["linxy", "logxy"] * 2 + ["logx", "logy"]
scales = ["linscale"] * 2 + ["logscale"] * 2 + ["semilogx", "semilogy"]
labels = ["{}-{}".format(ls, ps) for (ls, ps) in zip(grids, scales)]
params = list(
zip(
[
(False, False),
(True, True),
(False, False),
(True, True),
(True, False),
(False, True),
],
grids,
scales,
)
)
fig, ax = plt.subplots(3, 2, figsize=(8, 8))
ax = ax.flat
for a, (ls, grid, scale) in zip(ax, params):
lx, ly = ls
asym_df.pyroplot.density(ax=a, logx=lx, logy=ly, bins=30, cmap="viridis_r")
asym_df.pyroplot.density(
ax=a,
logx=lx,
logy=ly,
contours=[0.95, 0.5],
bins=30,
cmap="viridis",
fontsize=10,
)
asym_df.pyroplot.scatter(ax=a, s=10, alpha=0.3, c="k", zorder=2)
a.set_title("{}-{}".format(grid, scale), fontsize=10)
if scale in ["logscale", "semilogx"]:
a.set_xscale("log")
if scale in ["logscale", "semilogy"]:
a.set_yscale("log")
plt.tight_layout()
plt.show()
plt.close("all") # let's save some memory..
###Output
_____no_output_____
###Markdown
There are two other implemented modes beyond the default `density`: `hist2d` and `hexbin`, which parallel their equivalents in matplotlib. Contouring is not enabled for these histogram methods.
###Code
fig, ax = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(14, 5))
for a, mode in zip(ax, ["density", "hexbin", "hist2d"]):
df.loc[:, ["SiO2", "CaO"]].pyroplot.density(ax=a, mode=mode)
a.set_title("Mode: {}".format(mode))
plt.show()
###Output
_____no_output_____
###Markdown
For the ``density`` mode, a ``vmin`` parameter is used to choose the lower threshold, and by default is the 99th percentile (``vmin=0.01``), but it can be adjusted. This is useful where there are a number of outliers, or where you wish to reduce the overall complexity/colour intensity of a figure (also good for printing!).
###Code
fig, ax = plt.subplots(1, 3, figsize=(14, 4))
for a, vmin in zip(ax, [0.01, 0.1, 0.4]):
df.loc[:, ["SiO2", "CaO"]].pyroplot.density(ax=a, bins=30, vmin=vmin, colorbar=True)
plt.tight_layout()
plt.show()
plt.close("all") # let's save some memory..
###Output
_____no_output_____
###Markdown
Density plots can also be used for ternary diagrams, where more than two components are specified:
###Code
fig, ax = plt.subplots(
1,
3,
sharex=True,
sharey=True,
figsize=(15, 5),
subplot_kw=dict(projection="ternary"),
)
df.loc[:, ["SiO2", "CaO", "MgO"]].pyroplot.scatter(ax=ax[0], alpha=0.05, c="k")
for a, mode in zip(ax[1:], ["hist", "density"]):
df.loc[:, ["SiO2", "CaO", "MgO"]].pyroplot.density(ax=a, mode=mode)
a.set_title("Mode: {}".format(mode), y=1.2)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
examples/lints_reproducibility/table_2_3/Analysis.ipynb
###Markdown
MovieLens Lambda = 10
###Code
# Assumed imports (the original import cell is not shown in this excerpt); the metric
# classes used below come from the jurity library.
import os
import pickle
import numpy as np
import pandas as pd
from jurity.classification import BinaryClassificationMetrics
from jurity.recommenders import BinaryRecoMetrics, RankingRecoMetrics

mac = pickle.load(open(os.path.join('output', 'mac_ml_expectations2.pkl'), 'rb'))
win = pickle.load(open(os.path.join('output', 'win_ml_expectations2.pkl'), 'rb'))
sgm = pickle.load(open(os.path.join('output', 'sgm_ml_expectations2.pkl'), 'rb'))
rh = pickle.load(open(os.path.join('output', 'rh_ml_expectations2.pkl'), 'rb'))
dar = pickle.load(open(os.path.join('output', 'dar_ml_expectations2.pkl'), 'rb'))
responses = pd.read_csv('movielens_responses.csv')
responses.head()
users = pd.read_csv('movielens_users.csv')
test = users[users['set']=='test']
test.head()
# test = test.merge(responses, how='left', on='user id')
test = test[['user id']]
test.head()
test.reset_index(inplace=True, drop=True)
test.shape
mac_df = pd.DataFrame(mac)
win_df = pd.DataFrame(win)
sgm_df = pd.DataFrame(sgm)
rh_df = pd.DataFrame(rh)
dar_df = pd.DataFrame(dar)
mac_df = mac_df.merge(test, how='left', left_index=True, right_index=True)
win_df = win_df.merge(test, how='left', left_index=True, right_index=True)
sgm_df = sgm_df.merge(test, how='left', left_index=True, right_index=True)
rh_df = rh_df.merge(test, how='left', left_index=True, right_index=True)
dar_df = dar_df.merge(test, how='left', left_index=True, right_index=True)
mac_df.head()
mac_df.shape
mac_df = mac_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
win_df = win_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
sgm_df = sgm_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
rh_df = rh_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
dar_df = dar_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
mac_df.shape
mac_df.head()
test = users[users['set']=='test']
test = test.merge(responses, how='left', on='user id')
test = test[['user id', 'item id', 'rated']]
test.head()
mac_df = mac_df.merge(test, how='left', on=['user id', 'item id'])
win_df = win_df.merge(test, how='left', on=['user id', 'item id'])
sgm_df = sgm_df.merge(test, how='left', on=['user id', 'item id'])
rh_df = rh_df.merge(test, how='left', on=['user id', 'item id'])
dar_df = dar_df.merge(test, how='left', on=['user id', 'item id'])
mac_df.head()
mac_df.shape
mac_df['probability'] = mac_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
win_df['probability'] = win_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
sgm_df['probability'] = sgm_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
rh_df['probability'] = rh_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
dar_df['probability'] = dar_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
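# Note: a vectorized equivalent of the row-wise apply above (same column names assumed)
# would be, e.g., mac_df['probability'] = 1 / (1 + np.exp(-mac_df['raw_score'])),
# which is typically much faster on large frames.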
mac_df['pred'] = mac_df['probability'] >= 0.5
win_df['pred'] = win_df['probability'] >= 0.5
sgm_df['pred'] = sgm_df['probability'] >= 0.5
rh_df['pred'] = rh_df['probability'] >= 0.5
dar_df['pred'] = dar_df['probability'] >= 0.5
###Output
_____no_output_____
###Markdown
Accuracy, Precision, Recall, and AUC
###Code
scores = [('Mac', mac_df), ('Windows', win_df), ('SageMaker', sgm_df), ('Red Hat', rh_df), ('Darwin', dar_df)]
for name, df in scores:
acc = BinaryClassificationMetrics.Accuracy().get_score(df['pred'].tolist(), df['rated'].tolist())
f1 = BinaryClassificationMetrics.F1().get_score(df['pred'].tolist(), df['rated'].tolist())
precision = BinaryClassificationMetrics.Precision().get_score(df['pred'].tolist(), df['rated'].tolist())
recall = BinaryClassificationMetrics.Recall().get_score(df['pred'].tolist(), df['rated'].tolist())
auc = BinaryClassificationMetrics.AUC().get_score(df['pred'].tolist(), df['rated'].tolist())
print(name, ':', acc, f1, precision, recall, auc)
###Output
Mac : 0.501695356781217 0.12608596334804617 0.5373889011023523 0.07142171410444241 0.5045598561174798
Windows : 0.5032772696142486 0.12800247832388595 0.5450205709619673 0.07251681918850027 0.5056558645575786
SageMaker : 0.501695356781217 0.12608596334804617 0.5373889011023523 0.07142171410444241 0.5045598561174798
Red Hat : 0.501695356781217 0.12608596334804617 0.5373889011023523 0.07142171410444241 0.5045598561174798
Darwin : 0.5024852627908052 0.12659600362904116 0.5390220156402123 0.07172018971605273 0.504854733663946
###Markdown
Table 3, NDCG, Precision, Recall Columns
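For reference, $\mathrm{NDCG@}k = \mathrm{DCG@}k / \mathrm{IDCG@}k$, with $\mathrm{DCG@}k = \sum_{i=1}^{k} \mathrm{rel}_i / \log_2(i+1)$ and $\mathrm{IDCG@}k$ the DCG of the ideal ranking, so values lie in $[0, 1]$.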
###Code
for name, df in scores:
actual = df[['user id', 'item id', 'rated']].drop_duplicates(subset=['user id', 'item id'])
predicted = df[['user id', 'item id', 'probability']].drop_duplicates(subset=['user id', 'item id'])
predicted.columns = actual.columns
user_id_column = 'user id'
item_id_column = 'item id'
click_column = 'rated'
ctr = BinaryRecoMetrics.CTR(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
ctr5 = BinaryRecoMetrics.CTR(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
ctr10 = BinaryRecoMetrics.CTR(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
ndcg = RankingRecoMetrics.NDCG(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
ndcg5 = RankingRecoMetrics.NDCG(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
ndcg10 = RankingRecoMetrics.NDCG(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
precision = RankingRecoMetrics.Precision(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
precision5 = RankingRecoMetrics.Precision(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
precision10 = RankingRecoMetrics.Precision(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
recall = RankingRecoMetrics.Recall(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
recall5 = RankingRecoMetrics.Recall(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
recall10 = RankingRecoMetrics.Recall(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
print(name, ' CTR:', ctr, ctr5, ctr10)
print(name, ' NDCG:', ndcg, ndcg5, ndcg10)
print(name, ' Precision:', precision, precision5, precision10)
print(name, ' Recall:', recall, recall5, recall10)
###Output
Mac CTR: 0.13992932862190813 0.13003533568904593 0.11745583038869258
Mac NDCG: 0.020055892236522488 0.02861634089516421 0.0456816767701652
Mac Precision: 0.1399293286219081 0.13003533568904593 0.11745583038869259
Mac Recall: 0.006976173533242761 0.01326671783566324 0.029730586067924853
Windows CTR: 0.12508833922261484 0.12084805653710247 0.11024734982332156
Windows NDCG: 0.016618181384219898 0.02437742878523649 0.03989211350220072
Windows Precision: 0.12508833922261486 0.12084805653710248 0.11024734982332156
Windows Recall: 0.006376922817444793 0.011798747365181233 0.026500820459477246
SageMaker CTR: 0.13992932862190813 0.13003533568904593 0.11745583038869258
SageMaker NDCG: 0.020055892236522488 0.02861634089516421 0.0456816767701652
SageMaker Precision: 0.1399293286219081 0.13003533568904593 0.11745583038869259
SageMaker Recall: 0.006976173533242761 0.01326671783566324 0.029730586067924853
Red Hat CTR: 0.13992932862190813 0.13003533568904593 0.11745583038869258
Red Hat NDCG: 0.020055892236522488 0.02861634089516421 0.0456816767701652
Red Hat Precision: 0.1399293286219081 0.13003533568904593 0.11745583038869259
Red Hat Recall: 0.006976173533242761 0.01326671783566324 0.029730586067924853
Darwin CTR: 0.13356890459363957 0.1265017667844523 0.11236749116607773
Darwin NDCG: 0.018112975225015742 0.02669566428951683 0.04183491122926905
Darwin Precision: 0.1335689045936396 0.1265017667844523 0.11236749116607775
Darwin Recall: 0.007081857329930779 0.013209747751987 0.027139417179293066
###Markdown
Table 2, Score, Probability, and Prediction Comparisons
###Code
results = []
env = ['Mac', 'Windows', 'SageMaker', 'Red Hat', 'Darwin']
for name1, df1 in scores:
for name2, df2 in scores:
if name1 != name2:
sim1 = np.isclose(df1['raw_score'], df2['raw_score']).sum()
sim2 = np.isclose(df1['probability'], df2['probability']).sum()
sim3 = np.isclose(df1['pred'], df2['pred']).sum()
results.append((name1, name2, sim1, sim2, sim3))
results
###Output
_____no_output_____
###Markdown
Cholesky
###Code
mac = pickle.load(open(os.path.join('output', 'mac_ml_ch_expectations2.pkl'), 'rb'))
win = pickle.load(open(os.path.join('output', 'win_ml_ch_expectations2.pkl'), 'rb'))
sgm = pickle.load(open(os.path.join('output', 'sgm_ml_ch_expectations2.pkl'), 'rb'))
rh = pickle.load(open(os.path.join('output', 'rh_ml_ch_expectations2.pkl'), 'rb'))
dar = pickle.load(open(os.path.join('output', 'dar_ml_ch_expectations2.pkl'), 'rb'))
responses = pd.read_csv('movielens_responses.csv')
responses.head()
users = pd.read_csv('movielens_users.csv')
test = users[users['set']=='test']
test.head()
test = test[['user id']]
test.head()
test.reset_index(inplace=True, drop=True)
mac_df = pd.DataFrame(mac)
win_df = pd.DataFrame(win)
sgm_df = pd.DataFrame(sgm)
rh_df = pd.DataFrame(rh)
dar_df = pd.DataFrame(dar)
mac_df = mac_df.merge(test, how='left', left_index=True, right_index=True)
win_df = win_df.merge(test, how='left', left_index=True, right_index=True)
sgm_df = sgm_df.merge(test, how='left', left_index=True, right_index=True)
rh_df = rh_df.merge(test, how='left', left_index=True, right_index=True)
dar_df = dar_df.merge(test, how='left', left_index=True, right_index=True)
mac_df.head()
mac_df = mac_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
win_df = win_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
sgm_df = sgm_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
rh_df = rh_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
dar_df = dar_df.melt(id_vars=['user id'], var_name='item id', value_name='raw_score')
test = users[users['set']=='test']
test = test.merge(responses, how='left', on='user id')
test = test[['user id', 'item id', 'rated']]
test.head()
mac_df = mac_df.merge(test, how='left', on=['user id', 'item id'])
win_df = win_df.merge(test, how='left', on=['user id', 'item id'])
sgm_df = sgm_df.merge(test, how='left', on=['user id', 'item id'])
rh_df = rh_df.merge(test, how='left', on=['user id', 'item id'])
dar_df = dar_df.merge(test, how='left', on=['user id', 'item id'])
mac_df.head()
mac_df['probability'] = mac_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
win_df['probability'] = win_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
sgm_df['probability'] = sgm_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
rh_df['probability'] = rh_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
dar_df['probability'] = dar_df.apply(lambda x: 1/(1 + np.exp(-x['raw_score'])), axis=1)
mac_df['pred'] = mac_df['probability'] >= 0.5
win_df['pred'] = win_df['probability'] >= 0.5
sgm_df['pred'] = sgm_df['probability'] >= 0.5
rh_df['pred'] = rh_df['probability'] >= 0.5
dar_df['pred'] = dar_df['probability'] >= 0.5
scores = [('Mac', mac_df), ('Windows', win_df), ('SageMaker', sgm_df), ('Red Hat', rh_df), ('Darwin', dar_df)]
for name, df in scores:
acc = BinaryClassificationMetrics.Accuracy().get_score(df['pred'].tolist(), df['rated'].tolist())
f1 = BinaryClassificationMetrics.F1().get_score(df['pred'].tolist(), df['rated'].tolist())
precision = BinaryClassificationMetrics.Precision().get_score(df['pred'].tolist(), df['rated'].tolist())
recall = BinaryClassificationMetrics.Recall().get_score(df['pred'].tolist(), df['rated'].tolist())
auc = BinaryClassificationMetrics.AUC().get_score(df['pred'].tolist(), df['rated'].tolist())
print(name, ':', acc, f1, precision, recall, auc)
###Output
Mac : 0.5008823418192208 0.12434072070146213 0.5297572312427373 0.07043653279215627 0.5035666226160606
Windows : 0.5008823418192208 0.12434072070146213 0.5297572312427373 0.07043653279215627 0.5035666226160606
SageMaker : 0.5008823418192208 0.12434072070146213 0.5297572312427373 0.07043653279215627 0.5035666226160606
Red Hat : 0.5008823418192208 0.12434072070146213 0.5297572312427373 0.07043653279215627 0.5035666226160606
Darwin : 0.5008823418192208 0.12434072070146213 0.5297572312427373 0.07043653279215627 0.5035666226160606
###Markdown
Table 3, Cholesky Row, NDCG, Precision, Recall Columns
###Code
for name, df in scores:
actual = df[['user id', 'item id', 'rated']].drop_duplicates(subset=['user id', 'item id'])
predicted = df[['user id', 'item id', 'probability']].drop_duplicates(subset=['user id', 'item id'])
predicted.columns = actual.columns
user_id_column = 'user id'
item_id_column = 'item id'
click_column = 'rated'
ctr = BinaryRecoMetrics.CTR(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
ctr5 = BinaryRecoMetrics.CTR(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
ctr10 = BinaryRecoMetrics.CTR(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
ndcg = RankingRecoMetrics.NDCG(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
ndcg5 = RankingRecoMetrics.NDCG(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
ndcg10 = RankingRecoMetrics.NDCG(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
precision = RankingRecoMetrics.Precision(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
precision5 = RankingRecoMetrics.Precision(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
precision10 = RankingRecoMetrics.Precision(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
recall = RankingRecoMetrics.Recall(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=5).get_score(actual, predicted)
recall5 = RankingRecoMetrics.Recall(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=10).get_score(actual, predicted)
recall10 = RankingRecoMetrics.Recall(user_id_column=user_id_column, item_id_column=item_id_column,
click_column=click_column, k=25).get_score(actual, predicted)
print(name, ' CTR:', ctr, ctr5, ctr10)
print(name, ' NDCG:', ndcg, ndcg5, ndcg10)
print(name, ' Precision:', precision, precision5, precision10)
print(name, ' Recall:', recall, recall5, recall10)
results = []
env = ['Mac', 'Windows', 'SageMaker', 'Red Hat', 'Darwin']
for name1, df1 in scores:
for name2, df2 in scores:
if name1 != name2:
sim1 = np.isclose(df1['raw_score'], df2['raw_score']).sum()
sim2 = np.isclose(df1['probability'], df2['probability']).sum()
sim3 = np.isclose(df1['pred'], df2['pred']).sum()
results.append((name1, name2, sim1, sim2, sim3))
results
###Output
_____no_output_____
Rice type classification/Model/rice_type_classification.ipynb
###Markdown
Importing the Dataset
###Code
df = pd.read_csv("riceClassification.csv")
df.head()
df.describe()
df.info()
df.isna().sum(axis=0)
df.columns
###Output
_____no_output_____
###Markdown
Dropping unnecessary columns
###Code
df.drop(columns=['id'],inplace=True)
df.head()
df.corr()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
plt.figure(figsize=(3,3),dpi=150)
plt.style.use('dark_background')
sns.countplot(x='Class', data = df)
plt.xlabel('Target classes')
plt.ylabel('count of each class')
plt.title('Class distribution')
#plt.savefig("/Users/debjitpal/Documents/GitHub/ML-Crate/Rice type classification/Images/Class_distribution.png",bbox_inches = 'tight')
plt.figure(figsize=(10, 10))
heatmap = sns.heatmap(df.corr(), vmin= -1, vmax = 1, annot=True)
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':12})
#plt.savefig("/Users/debjitpal/Documents/GitHub/ML-Crate/Rice type classification/Images/Correlation_heatmap.png",bbox_inches = 'tight')
###Output
_____no_output_____
###Markdown
Partitioning the dataset into training and test sets
###Code
X=df.iloc[:,:-1]
y=df.iloc[:,-1]
print("//Independent features//")
print(X.head())
print("\n\n//Dependent feature//")
print(y.head())
###Output
//Independent features//
Area MajorAxisLength MinorAxisLength ... Perimeter Roundness AspectRation
0 4537 92.229316 64.012769 ... 273.085 0.764510 1.440796
1 2872 74.691881 51.400454 ... 208.317 0.831658 1.453137
2 3048 76.293164 52.043491 ... 210.012 0.868434 1.465950
3 3073 77.033628 51.928487 ... 210.657 0.870203 1.483456
4 3693 85.124785 56.374021 ... 230.332 0.874743 1.510000
[5 rows x 10 columns]
//Dependent feature//
0 1
1 1
2 1
3 1
4 1
Name: Class, dtype: int64
###Markdown
Train Test Split
###Code
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=0)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
scaler=StandardScaler()
X_train=scaler.fit_transform(X_train)
X_test=scaler.transform(X_test)
# Logistic Regression
lr=LogisticRegression()
lr_mdl=lr.fit(X_train,y_train)
lr_pred=lr.predict(X_test)
lr_con_matrix=confusion_matrix(y_test,lr_pred)
lr_acc=accuracy_score(y_test,lr_pred)
print("Confusion Matrix",'\n',lr_con_matrix)
print('\n')
print("Accuracy of Logistic Regression: ",lr_acc*100,'\n')
print(classification_report(y_test,lr_pred))
#Random Forest Classfier
rf = RandomForestClassifier()
rf.fit(X_train,y_train)
rf_pred = rf.predict(X_test)
rf_con_matrix = confusion_matrix(y_test, rf_pred)
rf_acc = accuracy_score(y_test, rf_pred)
print("Confusion Matrix\n",rf_con_matrix)
print("\n")
print("Accuracy of Random Forest:",rf_acc*100,'\n')
print(classification_report(y_test,rf_pred))
#DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
dt_pred = dt.predict(X_test)
dt_con_matrix = confusion_matrix(y_test, dt_pred)
dt_acc = accuracy_score(y_test, dt_pred)
print("Confusion Matrix\n",dt_con_matrix)
print("\n")
print("Accuracy of Decision Tree Classifier:",dt_acc*100,'\n')
print(classification_report(y_test,dt_pred))
y_score1 = lr.predict_proba(X_test)[:,1]
y_score2 = rf.predict_proba(X_test)[:,1]
y_score3 = dt.predict_proba(X_test)[:,1]
false_positive_rate1, true_positive_rate1, threshold1 = roc_curve(y_test, y_score1)
false_positive_rate2, true_positive_rate2, threshold2 = roc_curve(y_test, y_score2)
false_positive_rate3, true_positive_rate3, threshold3 = roc_curve(y_test, y_score3)
plt.figure(figsize=(5,5),dpi=150)
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.plot(false_positive_rate1,true_positive_rate1, color='red', label = "Logistic Regression")
plt.plot(false_positive_rate2,true_positive_rate2, color='blue', label = "Random Forest")
plt.plot(false_positive_rate3,true_positive_rate3, color='green', label = "Decision Tree")
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],linestyle='--')
plt.axis('tight')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
#plt.savefig("/Users/debjitpal/Documents/GitHub/ML-Crate/Rice type classification/Images/ROC_curve.png",bbox_inches = 'tight')
mdl_evl = pd.DataFrame({'Model': ['Logistic Regression','Random Forest', 'Decision Tree'], 'Accuracy': [lr_acc*100,rf_acc*100,dt_acc*100]})
mdl_evl
pal=['red','blue','green']
fig, ax = plt.subplots(figsize=(20,10))
sns.barplot(x="Model",y="Accuracy",palette=pal,data=mdl_evl)
plt.title('Model Accuracy')
plt.xlabel('Model')
plt.ylabel('Accuracy')
#plt.savefig("/Users/debjitpal/Documents/GitHub/ML-Crate/Rice type classification/Images/Model_accuracy.png",bbox_inches = 'tight')
###Output
_____no_output_____
notebooks/Basic/Guillaume_Time_Frequency.ipynb
###Markdown
Data resampling
Resample the series and visualise it at different scales.
###Code
# Hourly
hour_temp = fln_df.resample('H').mean()
hour_temp['glo_avg'].plot()
plt.grid()
# Daily
day_temp = fln_df.resample('D').mean()
day_temp['glo_avg'].plot()
plt.grid()
###Output
_____no_output_____
###Markdown
Explore the autocorrelation of the time series
Explore the autocorrelation of the global radiance data (here resampled to hourly values).
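For a detrended series $x_t$ with mean $\bar{x}$, the lag-$k$ autocorrelation is $\rho_k = \sum_t (x_t - \bar{x})(x_{t+k} - \bar{x}) \,/\, \sum_t (x_t - \bar{x})^2$; `acovf` below returns the unnormalised autocovariance and `acf` the normalised autocorrelation.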
###Code
rad = np.array(hour_temp['glo_avg'])
# detrend the seasonal data by removing the average
det_rad = rad - np.average(rad)
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.stattools import acovf
acv_rad = acovf(det_rad)
acf_rad = acf(det_rad)
acv_rad
###Output
_____no_output_____
###Markdown
Do the regression analysis
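The power spectrum computed below is the squared magnitude of the discrete Fourier coefficients, $P_k = |X_k|^2$ with $X_k = \sum_{n=0}^{N-1} x_n e^{-2\pi i k n / N}$, so large $P_k$ values flag the dominant harmonics of the series.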
###Code
# Get the fourier coefficients
# (fft/ifft are used below; their import cell is not shown, so a SciPy import is assumed here)
from scipy.fftpack import fft, ifft
#rad = np.array(hour_temp['glo_avg'])
det_rad_fft = fft(det_rad)
# Get the power spectrum
rad_ps = [np.abs(rd)**2 for rd in det_rad_fft]
#plt.subplot(2,1,2)
plt.plot(range(len(rad_ps)), rad_ps)
plt.xlabel('Frequency')
plt.ylabel('Power spectrum')
#plt.xlim([0, 30])
plt.grid()
plt.show()
greatest = [j for i, j in enumerate(rad_ps) if j > 0.1e12]
(sum(greatest) / sum(rad_ps)) * 100
rad_ps
# # Filter frequencies in the low part of the power spectrum and re-construct the series
# #
# # A threshold on the power spectrum is set (500 in the original description; the code below
# # uses 0.1e12 for this series). If the value of the power spectrum is below the threshold,
# # it is set to 0. This allows us to focus on the signal in the data rather than on the
# # fluctuations that come from the randomness of the process and of the measurements.
# #
# # The choice of threshold is arbitrary and of course open for debate.
## Clean the time series by keeping only the components whose power spectrum exceeds the threshold
clean_rad_fft = [det_rad_fft[i] if rad_ps[i] > 0.1e12 else 0
for i in range(len(det_rad_fft))]
clean_rad_ps = [rad_ps[i] if rad_ps[i] > 0.1e12 else 0
for i in range(len(rad_ps))]
plt.figure(figsize=[12,9])
plt.subplot(3,1,1)
plt.plot(np.transpose(clean_rad_ps))
#plt.xlim([0, 30])
plt.grid()
## redraw the series only with significant harmonics
rad_series_clean = ifft(clean_rad_fft)
plt.plot(rad_series_clean[0:100])
plt.plot(rad[0:100])
plt.legend(bbox_to_anchor=(1.18, 1.04))
plt.grid()
## put the trend back into the dataset
rad_trends = rad_series_clean + np.average(rad)
plt.plot(rad_trends[0:100])
plt.plot(rad[0:100])
plt.grid()
plt.show()
# remove any previously computed version of the cleaned series (only needed when re-running this cell)
if 'rad_clean_ts' in globals():
    del rad_clean_ts
rad_clean_ts = pd.Series(rad_trends, index=hour_temp.index)
rad_clean_ts[(rad_clean_ts.index.hour < 6) | (rad_clean_ts.index.hour > 20)] = 0
plt.plot(rad_clean_ts[0:100].values)
plt.plot(rad[0:100])
plt.grid()
plt.show()
###Output
/Users/cseveriano/anaconda3/lib/python3.6/site-packages/numpy/core/numeric.py:531: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
###Markdown
Correlation Test
###Code
fln_df = fln_df[(fln_df.index >= '2013-11-01') & (fln_df.index <= '2014-11-01')]
fln_df.info()
joi_df = pd.read_csv('data/processed/SONDA/JOI-15min.csv', sep=";", parse_dates=['date'], index_col='date')
# Fill the gaps in the series
joi_df = joi_df.fillna(method='ffill')
joi_df = joi_df[(joi_df.index >= '2013-11-01') & (joi_df.index <= '2014-11-01')]
###Output
_____no_output_____
development/only_fa_prior_fitting_development.ipynb
###Markdown
A notebook to verify we learn the correct priors when fitting only the priors
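Throughout, the factor analysis generative model for an observation $x \in \mathbb{R}^p$ is $x = \Lambda z + \mu + \epsilon$ with latents $z \sim \mathcal{N}(0, I)$, private noise $\epsilon \sim \mathcal{N}(0, \Psi)$ and diagonal $\Psi$; the loading matrix $\Lambda$ (`lm`), mean $\mu$ (`mn`), and private variances $\Psi$ (`psi`) are the parameters whose priors are being fit.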
###Code
%load_ext autoreload
%autoreload 2
import matplotlib.pyplot as plt
import numpy as np
import torch
from janelia_core.math.basic_functions import optimal_orthonormal_transform
from janelia_core.ml.utils import list_torch_devices
from probabilistic_model_synthesis.fa import FAMdl
from probabilistic_model_synthesis.fa import Fitter
from probabilistic_model_synthesis.fa import generate_simple_prior_collection
from probabilistic_model_synthesis.fa import generate_basic_posteriors
from probabilistic_model_synthesis.fa import VICollection
from probabilistic_model_synthesis.math import MeanFcnTransformer
from probabilistic_model_synthesis.math import StdFcnTransformer
from probabilistic_model_synthesis.visualization import plot_torch_dist
###Output
_____no_output_____
###Markdown
Parameters go here
###Code
# Number of individuals we simulate observing data from
n_individuals = 5
# Range of the number of variables we observe from each individual - the actual number of variables we observe from an
# individual will be pulled uniformly from this range (inclusive)
n_var_range = [1000, 1200]
# Range of the number of samples we observe from each individual - the actual number we observe from each individual
# will be drawn uniformly from this range (inclusive)
n_smps_range = [1000, 1500]
# Number of latent variables in the model
n_latent_vars = 2
# True if we should use GPUs for fitting if they are available
use_gpus = True
###Output
_____no_output_____
###Markdown
Create the true prior distributions that relate parameters in the model to variable (e.g., neuron) properties
###Code
true_priors = generate_simple_prior_collection(n_prop_vars=2, n_latent_vars=n_latent_vars,
lm_mn_w_init_std=1.0, lm_std_w_init_std=1.0,
mn_mn_w_init_std=1.0, mn_std_w_init_std=1.0,
psi_conc_f_w_init_std=2.0, psi_rate_f_w_init_std=1.0,
psi_conc_bias_mn=10.0, psi_rate_bias_mn=3.0)
###Output
_____no_output_____
###Markdown
Generate properties
###Code
ind_n_vars = np.random.randint(n_var_range[0], n_var_range[1]+1, n_individuals)
ind_n_smps = np.random.randint(n_smps_range[0], n_smps_range[1]+1, n_individuals)
ind_props = [torch.rand(size=[n_vars,2]) for n_vars in ind_n_vars]
###Output
_____no_output_____
###Markdown
Generate true FA models
###Code
with torch.no_grad():
ind_true_fa_mdls = [FAMdl(lm=true_priors.lm_prior.sample(props), mn=true_priors.mn_prior.sample(props).squeeze(),
psi=(true_priors.psi_prior.sample(props).squeeze()))
for props in ind_props]
###Output
_____no_output_____
###Markdown
Generate data from each model
###Code
with torch.no_grad():
ind_data = [mdl.sample(n_smps) for n_smps, mdl in zip(ind_n_smps, ind_true_fa_mdls)]
###Output
_____no_output_____
###Markdown
Fit new models together
###Code
#fit_priors = generate_simple_prior_collection(n_prop_vars=2, n_latent_vars=n_latent_vars)
fit_priors = generate_simple_prior_collection(n_prop_vars=2, n_latent_vars=n_latent_vars,
lm_mn_w_init_std=1.0, lm_std_w_init_std=1.0,
mn_mn_w_init_std=1.0, mn_std_w_init_std=1.0,
psi_conc_f_w_init_std=2.0, psi_rate_f_w_init_std=1.0,
psi_conc_bias_mn=10.0, psi_rate_bias_mn=3.0,
min_gaussian_std=.0001)
fit_posteriors = generate_basic_posteriors(n_obs_vars=ind_n_vars, n_smps=ind_n_smps, n_latent_vars=n_latent_vars)
fit_mdls = [FAMdl(lm=None, mn=None, psi=None) for i in range(n_individuals)]
vi_collections = [VICollection(data=data_i[1], props=props_i, mdl=mdl_i, posteriors=posteriors_i)
for data_i, props_i,mdl_i, posteriors_i in zip(ind_data, ind_props, fit_mdls, fit_posteriors)]
for vi_coll in vi_collections:
vi_coll.posteriors.lm_post = fit_priors.lm_prior
vi_coll.posteriors.mn_post = fit_priors.mn_prior
vi_coll.posteriors.psi_post = fit_priors.psi_prior
if use_gpus:
devices, _ = list_torch_devices()
else:
devices = [torch.device('cpu')]
fitter = Fitter(vi_collections=vi_collections, priors=fit_priors)
fitter.distribute(distribute_data=True, devices=devices)
log = fitter.fit(1000, milestones=[300, 500, 700, 1200], update_int=100, init_lr=.1,
skip_lm_kl=True, skip_mn_kl=True, skip_psi_kl=True)
fitter.distribute(devices=[torch.device('cpu')])
###Output
=========== EPOCH 0 COMPLETE ===========
Obj: 4.10e+07
----------------------------------------
NELL: 7.67e+06, 7.07e+06, 9.55e+06, 8.08e+06, 8.67e+06
Latent KL: 1.06e+01, 2.27e+01, 1.25e+01, 9.35e+00, 2.83e+01
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.1
Elapsed time (secs): 0.08827900886535645
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 100 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.99e+06, 2.79e+06, 3.69e+06, 3.06e+06, 3.36e+06
Latent KL: 4.07e+03, 3.90e+03, 5.00e+03, 3.95e+03, 4.50e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.1
Elapsed time (secs): 8.90388011932373
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 200 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.98e+06, 2.79e+06, 3.68e+06, 3.06e+06, 3.36e+06
Latent KL: 3.96e+03, 3.76e+03, 4.94e+03, 3.88e+03, 4.77e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.1
Elapsed time (secs): 17.799702405929565
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 300 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.98e+06, 2.79e+06, 3.68e+06, 3.05e+06, 3.35e+06
Latent KL: 3.90e+03, 3.77e+03, 5.00e+03, 3.80e+03, 4.78e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.010000000000000002
Elapsed time (secs): 26.629688024520874
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 400 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.98e+06, 2.78e+06, 3.68e+06, 3.05e+06, 3.36e+06
Latent KL: 3.90e+03, 3.79e+03, 4.91e+03, 3.83e+03, 4.76e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.010000000000000002
Elapsed time (secs): 35.37087655067444
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 500 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.98e+06, 2.78e+06, 3.68e+06, 3.05e+06, 3.35e+06
Latent KL: 3.90e+03, 3.77e+03, 4.94e+03, 3.83e+03, 4.76e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.0010000000000000002
Elapsed time (secs): 44.06828236579895
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 600 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.98e+06, 2.78e+06, 3.68e+06, 3.05e+06, 3.36e+06
Latent KL: 3.90e+03, 3.77e+03, 4.93e+03, 3.84e+03, 4.76e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.0010000000000000002
Elapsed time (secs): 52.77782964706421
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 700 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.98e+06, 2.79e+06, 3.68e+06, 3.05e+06, 3.36e+06
Latent KL: 3.90e+03, 3.78e+03, 4.92e+03, 3.83e+03, 4.76e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.00010000000000000003
Elapsed time (secs): 61.50692391395569
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 800 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.97e+06, 2.78e+06, 3.68e+06, 3.05e+06, 3.36e+06
Latent KL: 3.90e+03, 3.78e+03, 4.92e+03, 3.84e+03, 4.76e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.00010000000000000003
Elapsed time (secs): 70.17130923271179
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
=========== EPOCH 900 COMPLETE ===========
Obj: 1.59e+07
----------------------------------------
NELL: 2.98e+06, 2.78e+06, 3.68e+06, 3.05e+06, 3.35e+06
Latent KL: 3.90e+03, 3.78e+03, 4.93e+03, 3.84e+03, 4.76e+03
LM KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Mn KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
Psi KL: 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00
----------------------------------------
LR: 0.00010000000000000003
Elapsed time (secs): 78.84270524978638
----------------------------------------
CPU cur memory used (GB): 4.07e+00
GPU_0 cur memory used (GB): 1.49e-02, max memory used (GB): 1.49e-02
GPU_1 cur memory used (GB): 8.62e-03, max memory used (GB): 8.62e-03
###Markdown
Examine logs of fitting performance
###Code
fitter.plot_log(log)
###Output
[True, True, True, True, True, True]
###Markdown
Look at model fits
###Code
exam_mdl = 0
fit_lm = vi_collections[exam_mdl].posteriors.lm_post(ind_props[exam_mdl]).detach().squeeze()
fit_mn = vi_collections[exam_mdl].posteriors.mn_post(ind_props[exam_mdl]).detach().squeeze()
fit_psi = vi_collections[exam_mdl].posteriors.psi_post.mode(ind_props[exam_mdl]).detach().squeeze()
cmp_mdl = FAMdl(lm=fit_lm, mn=fit_mn, psi=fit_psi)
true_mdl = ind_true_fa_mdls[exam_mdl]
plt.figure()
true_mdl.compare_models(true_mdl, cmp_mdl)
###Output
(1030, 2)
###Markdown
Visualize parameters of the true prior distributions over the loading matrices
###Code
for d in range(n_latent_vars):
plt.figure(figsize=(9,3))
plot_torch_dist(mn_f=true_priors.lm_prior.mn_f, std_f=true_priors.lm_prior.std_f, vis_dim=d,
extra_title_str = ', d=' + str(d))
###Output
_____no_output_____
###Markdown
Visualize parameters of the fit prior distributions over the loading matrices
###Code
rnd_vls = torch.rand(1000,2)
o = optimal_orthonormal_transform(true_priors.lm_prior(rnd_vls).detach().numpy(),
fit_priors.lm_prior(rnd_vls).detach().numpy())
fit_lm_mn_fcn = MeanFcnTransformer(o=o.transpose(), f=fit_priors.lm_prior.mn_f)
fit_lm_std_fcn = StdFcnTransformer(o=o.transpose(), f=fit_priors.lm_prior.std_f)
for d in range(n_latent_vars):
plt.figure(figsize=(9,3))
plot_torch_dist(mn_f=fit_lm_mn_fcn, std_f=fit_lm_std_fcn, vis_dim=d,
extra_title_str = ', d=' + str(d))
###Output
torch.Size([1000000, 2])
torch.Size([1000000, 2])
###Markdown
Visualize parameters of the true prior distribution over the means
###Code
plt.figure(figsize=(9,3))
plot_torch_dist(mn_f=true_priors.mn_prior.mn_f, std_f=true_priors.mn_prior.std_f)
###Output
_____no_output_____
###Markdown
Visualize parameters of the fit prior distribution over the means
###Code
plt.figure(figsize=(9,3))
plot_torch_dist(mn_f=fit_priors.mn_prior.mn_f, std_f=fit_priors.mn_prior.std_f)
###Output
_____no_output_____
###Markdown
Visualize parameters of the true prior distribution over private variances
###Code
plt.figure(figsize=(9,3))
plot_torch_dist(mn_f=true_priors.psi_prior.forward, std_f=true_priors.psi_prior.std)
###Output
_____no_output_____
###Markdown
Visualize parameters of the fit prior distribution over private variances
###Code
plt.figure(figsize=(9,3))
plot_torch_dist(mn_f=fit_priors.psi_prior.forward, std_f=fit_priors.psi_prior.std)
###Output
_____no_output_____
###Markdown
Visualize latent estimates for an example model
###Code
ex_s = 0
# Learn transformation to put estimated latents into same space as true latents
with torch.no_grad():
true_lm = ind_true_fa_mdls[ex_s].lm.numpy()
est_lm = fit_posteriors[ex_s].lm_post(ind_props[ex_s]).numpy()
o = optimal_orthonormal_transform(true_lm, est_lm)
# Get estimated latents in the right space
est_latents = np.matmul(fit_posteriors[ex_s].latent_post.mns.detach().numpy(), o)
# Visualize latents
true_latents = ind_data[ex_s][0].numpy()
plt.figure()
for l_i in range(n_latent_vars):
ax = plt.subplot(n_latent_vars, 1, l_i+1)
plt.plot(true_latents[:, l_i], 'bo')
plt.plot(est_latents[:, l_i], 'r.')
###Output
_____no_output_____
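###Markdown
The `optimal_orthonormal_transform` helper used above comes from the project's own code and is not shown in this notebook. Assuming it solves an orthogonal Procrustes problem (an assumption based on its name and on how `o` is applied), a minimal self-contained sketch of that alignment step with SciPy is shown below; `true_lm` and `est_lm` here are synthetic stand-ins, not the fitted matrices above.
###Code
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
true_lm = rng.normal(size=(30, 2))               # stand-in "true" loading matrix
rot = np.linalg.qr(rng.normal(size=(2, 2)))[0]   # random orthonormal matrix
est_lm = true_lm @ rot                           # "estimate" differs by that rotation

# Find the orthonormal matrix o minimizing ||est_lm @ o - true_lm||_F
o, _ = orthogonal_procrustes(est_lm, true_lm)
print(np.allclose(est_lm @ o, true_lm))          # True (up to numerical error)
###Output
_____no_output_____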
|
docs/examples/neaten_merge_tools.ipynb
|
###Markdown
Neaten Merge Tools
###Code
from holoext.xbokeh import Mod
import holoviews as hv
import numpy as np
import warnings
warnings.filterwarnings('ignore') # bokeh deprecation warnings
hv.extension('bokeh')
# http://holoviews.org/reference/containers/bokeh/HoloMap.html
frequencies = [0.5, 0.75, 1.0, 1.25]
def sine_curve(phase, freq):
xvals = [0.1 * i for i in range(100)]
return hv.Curve((xvals, [np.sin(phase + freq * x) for x in xvals]), kdims='x_axis', vdims='y_axis')
curve_dict = {f: sine_curve(0, f) for f in frequencies}
hmap = hv.HoloMap(curve_dict, kdims=('frequency'))
hmap_grid = hv.GridMatrix(hmap)
###Output
_____no_output_____
###Markdown
Disable neaten_io (which replaces underscores with spaces)
###Code
Mod(neaten_io=False).apply(hmap)
###Output
_____no_output_____
###Markdown
Enable merge_tools
###Code
Mod(merge_tools=True).apply(hmap_grid)
###Output
_____no_output_____
|
notebooks/monte_carlo_dev/get_oil_type_tanker.ipynb
|
###Markdown
Decision tree for allocating oil type to tanker traffic. See the Google drawing [Tanker_Oil_Attribution](https://docs.google.com/drawings/d/1-4gl2yNNWxqXK-IOr4KNZxO-awBC-bNrjRNrt86fykU/edit) for a visual representation.
###Code
# These will become function inputs
origin = 'Westridge Marine Terminal'
destination = 'Pacific'
ship_type = 'tanker'
random_seed=None
import numpy
import yaml
import pathlib
# load the list of CAD, US and generic origins and destinations
master_dir = '/Users/rmueller/Projects/MIDOSS/analysis-rachael/notebooks/monte_carlo/'
master_file = 'master.yaml'
with open(f'{master_dir}{master_file}') as file:
master = yaml.safe_load(file)
# Assign US and CAD origin/destinations from master file
CAD_origin_destination = master['categories']['CAD_origin_destination']
US_origin_destination = master['categories']['US_origin_destination']
# Get file paths to fuel-type yaml files
home = pathlib.Path(master['directories'])
CAD_yaml = home/master['files']['cargo_CAD']
WA_in_yaml = home/master['files']['cargo_WA_in']
WA_out_yaml = home/master['files']['cargo_WA_out']
US_yaml = home/master['files']['cargo_US']
Pacific_yaml = home/master['files']['cargo_Pacific']
###Output
_____no_output_____
###Markdown
Create decision-making tree for tanker traffic. All attributions will be tank-cargo at this point. I don't think we have decision-making around spill type (fuel or cargo) yet (need to check!). Will likely add it here.
###Code
def get_fuel_type(yaml_file, facility):
with yaml_file.open("rt") as file:
cargo = yaml.safe_load(file)
tanker = cargo[facility][ship_type]
probability = [tanker[fuel]['fraction_of_total'] for fuel in tanker]
fuel_type = random_generator.choice(list(tanker.keys()), p = probability)
return fuel_type
# these pairs need to be used together for "get_fuel_type" (but don't yet have error-checks in place):
# - "WA_in_yaml" and "destination"
# - "WA_out_yaml" and "origin"
# Need to add a catch for erroneous cases where origin-destination in AIS analysis
# pairs vessel-type and facility to null values in the DOE transfer data.
# For these cases (which shouldn't happen but might), we will use the generic US fuel allocations
# Initialize PCG-64 random number generator
random_generator = numpy.random.default_rng(random_seed)
if origin in CAD_origin_destination:
if origin == 'Westridge Marine Terminal':
fuel_type = get_fuel_type(CAD_yaml, origin)
else:
if destination in US_origin_destination:
# we have better information on WA fuel transfers, so I'm prioritizing this information source
fuel_type = get_fuel_type(WA_in_yaml, destination)
else:
fuel_type = get_fuel_type(CAD_yaml, origin)
elif origin in US_origin_destination:
fuel_type = get_fuel_type(WA_out_yaml, origin)
elif destination in US_origin_destination:
fuel_type = get_fuel_type(WA_in_yaml, destination)
elif destination in CAD_origin_destination:
fuel_type = get_fuel_type(CAD_yaml, destination)
elif origin == 'US':
fuel_type = get_fuel_type(US_yaml, origin)
elif origin == 'Canada':
fuel_type = get_fuel_type(US_yaml, origin)
else:
# this is the error-check allocation for the (hopefully no) cases in which a ship track
# wasn't allocated either origin or destination
fuel_type = random_generator.choice(['diesel','akns'], p = [.5, .5])
###Output
_____no_output_____
|
python_extras/functions-objects.ipynb
|
###Markdown
Functions as Objects

Functions in Python are **first-class objects**. Programming language theorists define a **first-class object** as a program entity that can be:
- Created at runtime
- Assigned to a variable or element in a data structure
- Passed as an argument to a function
- Returned as the result of a function

Integers, strings, and dictionaries are other examples of first-class objects in Python — nothing fancy here. But if you came to Python from a language where functions are **not** first-class citizens, this notebook and the rest focus on the implications and practical applications of treating functions as objects.

Treating a Function like an Object
###Code
def factorial(n):
'''returns n!'''
return 1 if n < 2 else n * factorial(n-1)
factorial(42)
factorial.__doc__
type(factorial)
###Output
_____no_output_____
###Markdown
Introspection
###Code
dir(factorial)
factorial.__class__
###Output
_____no_output_____
###Markdown
Use function through a different name, and pass function as argument
###Code
fact = factorial
fact
fact(5)
###Output
_____no_output_____
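###Markdown
The cell above only demonstrates calling the function through a different name. As a minimal sketch of the second half of that heading (passing the function as an argument), the snippet below passes `factorial` to the built-in `map`; `factorial` is re-defined so the cell stands on its own.
###Code
def factorial(n):
    '''returns n!'''
    return 1 if n < 2 else n * factorial(n - 1)

# A function object can be passed as an argument like any other object.
print(list(map(factorial, range(6))))   # [1, 1, 2, 6, 24, 120]
###Output
_____no_output_____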
###Markdown
**Note**: Having first-class functions enables programming in a **functional style**. One of the hallmarks of functional programming is the use of **higher-order functions**.

Higher-Order Functions

>A function that takes a function as argument or returns a function as the result is a higher-order function.

One example is `map`. Another is the built-in function `sorted`: an optional `key` argument lets you provide a function to be applied to each item for sorting, as seen in `list.sort` and the `sorted` functions. For example, to sort a list of words by length, simply pass the `len` function as the key:
###Code
fruits = ['strawberry', 'fig', 'apple', 'cherry', 'raspberry', 'banana']
sorted(fruits, key=len)
###Output
_____no_output_____
###Markdown
Anonymous Functions

The `lambda` keyword creates an anonymous function within a Python expression. However, the simple syntax of Python limits the body of `lambda` functions to pure expressions. In other words, the body of a lambda cannot make assignments or use any other Python statement such as `while`, etc. The best use of anonymous functions is in the context of an argument list.
###Code
fruits = ['strawberry', 'fig', 'apple', 'cherry', 'raspberry', 'banana']
sorted(fruits, key=lambda word: word[::-1])
###Output
_____no_output_____
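###Markdown
A minimal sketch showing that the anonymous key above can always be refactored into a named function (the fruit list is repeated so the cell stands on its own):
###Code
fruits = ['strawberry', 'fig', 'apple', 'cherry', 'raspberry', 'banana']

def reversed_spelling(word):
    '''Key function: sort each word by its reversed spelling.'''
    return word[::-1]

sorted(fruits, key=reversed_spelling)
###Output
_____no_output_____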
###Markdown
Outside the limited context of arguments to higher-order functions, anonymous functions are rarely useful in Python. The syntactic restrictions tend to make nontrivial lambdas either unreadable or unworkable.

**Lundh's lambda Refactoring Recipe**

If you find a piece of code hard to understand because of a lambda, Fredrik Lundh suggests this refactoring procedure:
> 1. Write a comment explaining what the heck that lambda does.
> 2. Study the comment for a while, and think of a name that captures the essence of the comment.
> 3. Convert the lambda to a def statement, using that name.
> 4. Remove the comment.

These steps are quoted from the [Functional Programming HOWTO](https://docs.python.org/3/howto/functional.html), a **must read**. The `lambda` syntax is just syntactic sugar: a lambda expression creates a function object just like the def statement.

Function Annotations

**Python 3** provides syntax to attach _metadata_ to the parameters of a function declaration and its return value.
###Code
def clip(text:str, max_len:'int > 0'=80) -> str:
"""Return text clipped at the last space before or after max_len
"""
end = None
if len(text) > max_len:
space_before = text.rfind(' ', 0, max_len)
if space_before >= 0:
end = space_before
else:
space_after = text.rfind(' ', max_len)
if space_after >= 0:
end = space_after
if end is None: # no spaces were found
end = len(text)
return text[:end].rstrip()
clip.__annotations__
###Output
_____no_output_____
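###Markdown
The standard library's `inspect` module is how tools typically consume these annotations; a minimal sketch, assuming the `clip` function from the cell above has been executed:
###Code
from inspect import signature

sig = signature(clip)              # assumes clip is defined in the cell above
print(sig.return_annotation)
for name, param in sig.parameters.items():
    print(name, ':', param.annotation)
###Output
_____no_output_____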
|
python/docs/source/examples/predict.ipynb
|
###Markdown
Predict comparison
###Code
import vowpalwabbit
def my_predict(vw, ex):
pp = 0.
for f,v in ex.iter_features():
pp += vw.get_weight(f) * v
return pp
def ensure_close(a,b,eps=1e-6):
if abs(a-b) > eps:
raise Exception("test failed: expected " + str(a) + " and " + str(b) + " to be " + str(eps) + "-close, but they differ by " + str(abs(a-b)))
###############################################################################
vw = vowpalwabbit.Workspace("--quiet")
###############################################################################
vw.learn("1 |x a b")
###############################################################################
print('# do some stuff with a read example:')
ex = vw.example("1 |x a b |y c")
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
updated_pred = ex.get_updated_prediction()
print('current partial prediction =', updated_pred)
# compute our own prediction
print(' my view of example =', str([(f,v,vw.get_weight(f)) for f,v in ex.iter_features()]))
my_pred = my_predict(vw, ex)
print(' my partial prediction =', my_pred)
ensure_close(updated_pred, my_pred)
print('')
vw.finish_example(ex)
###############################################################################
print('# make our own example from scratch')
ex = vw.example()
ex.set_label_string("0")
ex.push_features('x', ['a', 'b'])
ex.push_features('y', [('c', 1.)])
ex.setup_example()
print(' my view of example =', str([(f,v,vw.get_weight(f)) for f,v in ex.iter_features()]))
my_pred2 = my_predict(vw, ex)
print(' my partial prediction =', my_pred2)
ensure_close(my_pred, my_pred2)
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
print(' final partial prediction =', ex.get_updated_prediction())
ensure_close(ex.get_updated_prediction(), my_predict(vw,ex))
print('')
vw.finish_example(ex)
###############################################################################
exList = []
for i in range(120):
ex = vw.example()
exList.append(ex)
# this is the safe way to delete the examples for VW to reuse:
for ex in exList:
vw.finish_example(ex)
exList = [] # this should __del__ the examples, we hope :)
for i in range(120):
ex = vw.example()
exList.append(ex)
for ex in exList:
vw.finish_example(ex)
###############################################################################
for i in range(2):
ex = vw.example("1 foo| a b")
ex.learn()
print('tag =', ex.get_tag())
print('partial pred =', ex.get_partial_prediction())
print('loss =', ex.get_loss())
print('label =', ex.get_label())
vw.finish_example(ex)
# to be safe, finish explicitly (should happen by default anyway)
vw.finish()
###############################################################################
print('# test some save/load behavior')
vw = vowpalwabbit.Workspace("--quiet -f test.model")
ex = vw.example("1 |x a b |y c")
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
before_save = ex.get_updated_prediction()
print('before saving, prediction =', before_save)
vw.finish_example(ex)
vw.finish() # this should create the file
# now re-start vw by loading that model
vw = vowpalwabbit.Workspace("--quiet -i test.model")
ex = vw.example("1 |x a b |y c") # test example
ex.learn()
after_save = ex.get_partial_prediction()
print(' after saving, prediction =', after_save)
vw.finish_example(ex)
ensure_close(before_save, after_save)
vw.finish() # this should create the file
print('done!')
###Output
_____no_output_____
###Markdown
Predict comparison
###Code
from vowpalwabbit import pyvw
def my_predict(vw, ex):
pp = 0.
for f,v in ex.iter_features():
pp += vw.get_weight(f) * v
return pp
def ensure_close(a,b,eps=1e-6):
if abs(a-b) > eps:
raise Exception("test failed: expected " + str(a) + " and " + str(b) + " to be " + str(eps) + "-close, but they differ by " + str(abs(a-b)))
###############################################################################
vw = pyvw.vw("--quiet")
###############################################################################
vw.learn("1 |x a b")
###############################################################################
print('# do some stuff with a read example:')
ex = vw.example("1 |x a b |y c")
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
updated_pred = ex.get_updated_prediction()
print('current partial prediction =', updated_pred)
# compute our own prediction
print(' my view of example =', str([(f,v,vw.get_weight(f)) for f,v in ex.iter_features()]))
my_pred = my_predict(vw, ex)
print(' my partial prediction =', my_pred)
ensure_close(updated_pred, my_pred)
print('')
vw.finish_example(ex)
###############################################################################
print('# make our own example from scratch')
ex = vw.example()
ex.set_label_string("0")
ex.push_features('x', ['a', 'b'])
ex.push_features('y', [('c', 1.)])
ex.setup_example()
print(' my view of example =', str([(f,v,vw.get_weight(f)) for f,v in ex.iter_features()]))
my_pred2 = my_predict(vw, ex)
print(' my partial prediction =', my_pred2)
ensure_close(my_pred, my_pred2)
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
print(' final partial prediction =', ex.get_updated_prediction())
ensure_close(ex.get_updated_prediction(), my_predict(vw,ex))
print('')
vw.finish_example(ex)
###############################################################################
exList = []
for i in range(120): # note: if this is >=129, we hang!!!
ex = vw.example()
exList.append(ex)
# this is the safe way to delete the examples for VW to reuse:
for ex in exList:
vw.finish_example(ex)
exList = [] # this should __del__ the examples, we hope :)
for i in range(120): # note: if this is >=129, we hang!!!
ex = vw.example()
exList.append(ex)
for ex in exList:
vw.finish_example(ex)
###############################################################################
for i in range(2):
ex = vw.example("1 foo| a b")
ex.learn()
print('tag =', ex.get_tag())
print('partial pred =', ex.get_partial_prediction())
print('loss =', ex.get_loss())
print('label =', ex.get_label())
vw.finish_example(ex)
# to be safe, finish explicitly (should happen by default anyway)
vw.finish()
###############################################################################
print('# test some save/load behavior')
vw = pyvw.vw("--quiet -f test.model")
ex = vw.example("1 |x a b |y c")
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
before_save = ex.get_updated_prediction()
print('before saving, prediction =', before_save)
vw.finish_example(ex)
vw.finish() # this should create the file
# now re-start vw by loading that model
vw = pyvw.vw("--quiet -i test.model")
ex = vw.example("1 |x a b |y c") # test example
ex.learn()
after_save = ex.get_partial_prediction()
print(' after saving, prediction =', after_save)
vw.finish_example(ex)
ensure_close(before_save, after_save)
vw.finish() # this should create the file
print('done!')
###Output
_____no_output_____
###Markdown
Predict comparison
###Code
import vowpalwabbit
def my_predict(vw, ex):
pp = 0.0
for f, v in ex.iter_features():
pp += vw.get_weight(f) * v
return pp
def ensure_close(a, b, eps=1e-6):
if abs(a - b) > eps:
raise Exception(
"test failed: expected "
+ str(a)
+ " and "
+ str(b)
+ " to be "
+ str(eps)
+ "-close, but they differ by "
+ str(abs(a - b))
)
###############################################################################
vw = vowpalwabbit.Workspace("--quiet")
###############################################################################
vw.learn("1 |x a b")
###############################################################################
print("# do some stuff with a read example:")
ex = vw.example("1 |x a b |y c")
ex.learn()
ex.learn()
ex.learn()
ex.learn()
updated_pred = ex.get_updated_prediction()
print("current partial prediction =", updated_pred)
# compute our own prediction
print(
" my view of example =",
str([(f, v, vw.get_weight(f)) for f, v in ex.iter_features()]),
)
my_pred = my_predict(vw, ex)
print(" my partial prediction =", my_pred)
ensure_close(updated_pred, my_pred)
print("")
vw.finish_example(ex)
###############################################################################
print("# make our own example from scratch")
ex = vw.example()
ex.set_label_string("0")
ex.push_features("x", ["a", "b"])
ex.push_features("y", [("c", 1.0)])
ex.setup_example()
print(
" my view of example =",
str([(f, v, vw.get_weight(f)) for f, v in ex.iter_features()]),
)
my_pred2 = my_predict(vw, ex)
print(" my partial prediction =", my_pred2)
ensure_close(my_pred, my_pred2)
ex.learn()
ex.learn()
ex.learn()
ex.learn()
print(" final partial prediction =", ex.get_updated_prediction())
ensure_close(ex.get_updated_prediction(), my_predict(vw, ex))
print("")
vw.finish_example(ex)
###############################################################################
exList = []
for i in range(120):
ex = vw.example()
exList.append(ex)
# this is the safe way to delete the examples for VW to reuse:
for ex in exList:
vw.finish_example(ex)
exList = [] # this should __del__ the examples, we hope :)
for i in range(120):
ex = vw.example()
exList.append(ex)
for ex in exList:
vw.finish_example(ex)
###############################################################################
for i in range(2):
ex = vw.example("1 foo| a b")
ex.learn()
print("tag =", ex.get_tag())
print("partial pred =", ex.get_partial_prediction())
print("loss =", ex.get_loss())
print("label =", ex.get_label())
vw.finish_example(ex)
# to be safe, finish explicitly (should happen by default anyway)
vw.finish()
###############################################################################
print("# test some save/load behavior")
vw = vowpalwabbit.Workspace("--quiet -f test.model")
ex = vw.example("1 |x a b |y c")
ex.learn()
ex.learn()
ex.learn()
ex.learn()
before_save = ex.get_updated_prediction()
print("before saving, prediction =", before_save)
vw.finish_example(ex)
vw.finish() # this should create the file
# now re-start vw by loading that model
vw = vowpalwabbit.Workspace("--quiet -i test.model")
ex = vw.example("1 |x a b |y c") # test example
ex.learn()
after_save = ex.get_partial_prediction()
print(" after saving, prediction =", after_save)
vw.finish_example(ex)
ensure_close(before_save, after_save)
vw.finish() # this should create the file
print("done!")
###Output
_____no_output_____
###Markdown
Predict comparison
###Code
from vowpalwabbit import pyvw
def my_predict(vw, ex):
pp = 0.
for f,v in ex.iter_features():
pp += vw.get_weight(f) * v
return pp
def ensure_close(a,b,eps=1e-6):
if abs(a-b) > eps:
raise Exception("test failed: expected " + str(a) + " and " + str(b) + " to be " + str(eps) + "-close, but they differ by " + str(abs(a-b)))
###############################################################################
vw = pyvw.Workspace("--quiet")
###############################################################################
vw.learn("1 |x a b")
###############################################################################
print('# do some stuff with a read example:')
ex = vw.example("1 |x a b |y c")
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
updated_pred = ex.get_updated_prediction()
print('current partial prediction =', updated_pred)
# compute our own prediction
print(' my view of example =', str([(f,v,vw.get_weight(f)) for f,v in ex.iter_features()]))
my_pred = my_predict(vw, ex)
print(' my partial prediction =', my_pred)
ensure_close(updated_pred, my_pred)
print('')
vw.finish_example(ex)
###############################################################################
print('# make our own example from scratch')
ex = vw.example()
ex.set_label_string("0")
ex.push_features('x', ['a', 'b'])
ex.push_features('y', [('c', 1.)])
ex.setup_example()
print(' my view of example =', str([(f,v,vw.get_weight(f)) for f,v in ex.iter_features()]))
my_pred2 = my_predict(vw, ex)
print(' my partial prediction =', my_pred2)
ensure_close(my_pred, my_pred2)
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
print(' final partial prediction =', ex.get_updated_prediction())
ensure_close(ex.get_updated_prediction(), my_predict(vw,ex))
print('')
vw.finish_example(ex)
###############################################################################
exList = []
for i in range(120): # note: if this is >=129, we hang!!!
ex = vw.example()
exList.append(ex)
# this is the safe way to delete the examples for VW to reuse:
for ex in exList:
vw.finish_example(ex)
exList = [] # this should __del__ the examples, we hope :)
for i in range(120): # note: if this is >=129, we hang!!!
ex = vw.example()
exList.append(ex)
for ex in exList:
vw.finish_example(ex)
###############################################################################
for i in range(2):
ex = vw.example("1 foo| a b")
ex.learn()
print('tag =', ex.get_tag())
print('partial pred =', ex.get_partial_prediction())
print('loss =', ex.get_loss())
print('label =', ex.get_label())
vw.finish_example(ex)
# to be safe, finish explicitly (should happen by default anyway)
vw.finish()
###############################################################################
print('# test some save/load behavior')
vw = pyvw.Workspace("--quiet -f test.model")
ex = vw.example("1 |x a b |y c")
ex.learn() ; ex.learn() ; ex.learn() ; ex.learn()
before_save = ex.get_updated_prediction()
print('before saving, prediction =', before_save)
vw.finish_example(ex)
vw.finish() # this should create the file
# now re-start vw by loading that model
vw = pyvw.Workspace("--quiet -i test.model")
ex = vw.example("1 |x a b |y c") # test example
ex.learn()
after_save = ex.get_partial_prediction()
print(' after saving, prediction =', after_save)
vw.finish_example(ex)
ensure_close(before_save, after_save)
vw.finish() # this should create the file
print('done!')
###Output
_____no_output_____
|
recommendation.ipynb
|
###Markdown
**Recommendation GUI screen** The selected meal type and input ingredients are used to recommend the top 10 recipes based on a recommendation score. Co-occurring ingredients can be found using the co-occurring ingredient search button. Any recommended recipe can then be clicked to get more information about it. ***Created by Rahul Maheshwari***
###Code
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import ast
import pickle
import sys
from tkinter import END, SINGLE
import numpy as np
import pandas as pd
from nltk.stem import WordNetLemmatizer
import pattern as pt
import recipe_info as ri
###Output
_____no_output_____
###Markdown
**Invoking the GUI Screen components**
###Code
try:
import Tkinter as tk
except ImportError:
import tkinter as tk
try:
import ttk
py3 = False
except ImportError:
import tkinter.ttk as ttk
py3 = True
import recommendation_support
def vp_start_gui():
'''Starting point when module is the main routine.'''
global val, w, root
root = tk.Tk()
recommendation_support.set_Tk_var()
top = Toplevel1(root)
recommendation_support.init(root, top)
root.mainloop()
w = None
def create_Toplevel1(rt, *args, **kwargs):
'''Starting point when module is imported by another module.
Correct form of call: 'create_Toplevel1(root, *args, **kwargs)' .'''
global w, w_win, root
# rt = root
root = rt
w = tk.Toplevel(root)
recommendation_support.set_Tk_var()
top = Toplevel1(w)
recommendation_support.init(w, top, *args, **kwargs)
return (w, top)
def destroy_Toplevel1():
global w
w.destroy()
w = None
def suggest_screen():
pt.vp_start_gui()
###Output
_____no_output_____
###Markdown
* **Method to calculate the recommendation score based on the ingredients and meal type selected by the user.**
* **Normalized numerical features are used to calculate the recommendation score for each recipe of the specified meal type; the recipes are then sorted by score and the top 10 are shown in the UI.**
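Concretely, the score computed in `recommend()` below can be written as $\text{score}(r) = |I_{\text{user}} \cap I_r| \times \left(\overline{\text{pos}}(r) - \overline{\text{neg}}(r)\right)$, where $I_{\text{user}}$ is the set of input ingredients, $I_r$ is the recipe's lookup ingredients, $\overline{\text{pos}}(r)$ averages the normalized Rating Score, Carbs, Fiber and Protein, and $\overline{\text{neg}}(r)$ averages the normalized Calories, Fat and Cholesterol.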
###Code
def recommend(data_type_recipes, ingredients, k):
com_val = {}
for i in range(len(data_type_recipes)):
neg_factors = np.mean([data_type_recipes['Calories'].iloc[i], data_type_recipes['Fat'].iloc[i],
data_type_recipes['Cholesterol'].iloc[i]])
pos_factors = np.mean([data_type_recipes['Rating Score'].iloc[i], data_type_recipes['Carbs'].iloc[i],
data_type_recipes['Fiber'].iloc[i], data_type_recipes['Protein'].iloc[i]])
com_val[data_type_recipes['Recipe ID'].iloc[i]] = len(
set(ingredients) & set(data_type_recipes['Lookup Ingredients'].iloc[i])) * (pos_factors - neg_factors)
return sorted(com_val, key=com_val.get, reverse=True)[:k]
###Output
_____no_output_____
###Markdown
**Class containing methods to preprocess the ingredients, calculate the recommendation score, and populate the list box with the recommended recipe titles, any of which can be clicked to get complete information about that recipe.**
###Code
class Toplevel1:
def recommend_recipes(self):
k=10
lm = WordNetLemmatizer()
ingredients = str(self.Entry1.get()).split(',')
ingredients = [i.strip() for i in ingredients]
ingredients = [lm.lemmatize(i) for i in ingredients]
meal_type = str(self.TCombobox1.get())
lmtzr = WordNetLemmatizer()
# Preprocessing
data = pd.read_csv("recipes.csv")
data2 = data.copy()
temp = []
val = 1.0
for y in data['Calories']:
try:
val = float(y)
except:
y = y.replace(",", "")
val = float(y)
temp.append(val)
data['Calories'] = temp
cols = ['Ingredients', 'Cooking instructions', 'Rating', 'Lookup Ingredients']
for x in cols:
l = []
for y in data[x]:
try:
res = ast.literal_eval(y)
if x == 'Lookup Ingredients':
res = [lmtzr.lemmatize(word) for word in res]
except:
print(y)
l.append(res)
data[x] = l
cols_to_norm = ['Calories', 'Fat', 'Cholesterol', 'Carbs', 'Fiber', 'Protein']
data[cols_to_norm] = data[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
data_type_recipes = data[data['Meal'] == meal_type]
data_type_recipes = data_type_recipes.dropna()
recommendations = recommend(data_type_recipes, ingredients, k)
recommendation_urls = []
recommend_recipes = []
for x in recommendations:
val = data_type_recipes.loc[data_type_recipes['Recipe ID'] == x, ['URL', 'Title']]
recommendation_urls.append(list(val['URL'])[0])
recommend_recipes.append(list(val['Title'])[0])
self.Scrolledlistbox1.insert(END, "")
self.Scrolledlistbox1.insert(END, "Top 10 Recommended recipes for " + str(meal_type))
self.Scrolledlistbox1.insert(END, "")
if len(recommend_recipes) != 0:
for r in recommend_recipes:
self.Scrolledlistbox1.insert(END, str(r))
else:
self.Scrolledlistbox1.insert(END, str("No recipes for given ingredients"))
def suggest(self):
lm = WordNetLemmatizer()
ingredients = str(self.Entry1.get()).split(',')
ingredients = [i.strip() for i in ingredients]
ingredients = [lm.lemmatize(i) for i in ingredients]
meal_type = str(self.TCombobox1.get())
dbfile = open('ingredients_pickle', 'wb')
pickle.dump(ingredients, dbfile)
dbfile.close()
dbfile = open('meal_type_pickle', 'wb')
pickle.dump(meal_type, dbfile)
dbfile.close()
suggest_screen()
def list_item_select(self, event):
selected_recipe_name = self.Scrolledlistbox1.get(self.Scrolledlistbox1.curselection())
dbfile = open('selected_recipe_name', 'wb')
pickle.dump(selected_recipe_name, dbfile)
dbfile.close()
ri.vp_start_gui()
# webbrowser.open_new("https://www.kaggle.com/")
def __init__(self, top=None):
"""This class configures and populates the toplevel window.
top is the toplevel containing window."""
_bgcolor = '#d9d9d9' # X11 color: 'gray85'
_fgcolor = '#000000' # X11 color: 'black'
_compcolor = '#d9d9d9' # X11 color: 'gray85'
_ana1color = '#d9d9d9' # X11 color: 'gray85'
_ana2color = '#ececec' # Closest X11 color: 'gray92'
font10 = "-family {Century Gothic} -size 14"
self.style = ttk.Style()
if sys.platform == "win32":
self.style.theme_use('winnative')
self.style.configure('.', background=_bgcolor)
self.style.configure('.', foreground=_fgcolor)
self.style.configure('.', font="TkDefaultFont")
self.style.map('.', background=
[('selected', _compcolor), ('active', _ana2color)])
top.geometry("1920x1001+-101+39")
top.attributes("-fullscreen", True)
top.minsize(148, 1)
top.maxsize(1924, 1055)
top.resizable(1, 1)
top.title("New Toplevel")
top.configure(background="#ed5f83")
top.configure(highlightbackground="#d9d9d9")
top.configure(highlightcolor="black")
self.Label1 = tk.Label(top)
self.Label1.place(relx=0.340, rely=0.09, height=78, width=549)
self.Label1.configure(activebackground="#f9f9f9")
self.Label1.configure(activeforeground="black")
self.Label1.configure(background="#ed5f83")
self.Label1.configure(disabledforeground="#a3a3a3")
self.Label1.configure(font="-family {Trebuchet MS} -size 24 -weight bold")
self.Label1.configure(foreground="#ffffff")
self.Label1.configure(highlightbackground="#d9d9d9")
self.Label1.configure(highlightcolor="black")
self.Label1.configure(text='''Recommendation of Recipes''')
self.Button1 = tk.Button(top)
self.Button1.place(relx=0.922, rely=0.05, height=53, width=68)
self.Button1.configure(activebackground="#ececec")
self.Button1.configure(activeforeground="#000000")
self.Button1.configure(background="#b30000")
self.Button1.configure(disabledforeground="#a3a3a3")
self.Button1.configure(font="-family {Segoe UI} -size 14 -weight bold")
self.Button1.configure(foreground="#ffffff")
self.Button1.configure(highlightbackground="#d9d9d9")
self.Button1.configure(highlightcolor="black")
self.Button1.configure(pady="0")
self.Button1.configure(text='''X''')
self.Button1.configure(command=root.destroy)
meal_list = ['Breakfast', 'Lunch', 'Dinner']
self.TCombobox1 = ttk.Combobox(top, values=meal_list, state='readonly')
self.TCombobox1.place(relx=0.135, rely=0.39, relheight=0.036
, relwidth=0.129)
self.TCombobox1.configure(font="-family {Segoe UI} -size 14")
self.TCombobox1.configure(textvariable=recommendation_support.combobox)
self.TCombobox1.configure(takefocus="")
self.Label1_1 = tk.Label(top)
self.Label1_1.place(relx=0.125, rely=0.34, height=38, width=228)
self.Label1_1.configure(activebackground="#f9f9f9")
self.Label1_1.configure(activeforeground="black")
self.Label1_1.configure(background="#ed5f83")
self.Label1_1.configure(disabledforeground="#a3a3a3")
self.Label1_1.configure(font="-family {Trebuchet MS} -size 14")
self.Label1_1.configure(foreground="#ffffff")
self.Label1_1.configure(highlightbackground="#d9d9d9")
self.Label1_1.configure(highlightcolor="black")
self.Label1_1.configure(text='''Select meal type''')
self.Label1_2 = tk.Label(top)
self.Label1_2.place(relx=0.135, rely=0.48, height=39, width=228)
self.Label1_2.configure(activebackground="#f9f9f9")
self.Label1_2.configure(activeforeground="black")
self.Label1_2.configure(background="#ed5f83")
self.Label1_2.configure(disabledforeground="#a3a3a3")
self.Label1_2.configure(font="-family {Trebuchet MS} -size 14")
self.Label1_2.configure(foreground="#ffffff")
self.Label1_2.configure(highlightbackground="#d9d9d9")
self.Label1_2.configure(highlightcolor="black")
self.Label1_2.configure(text='''Enter Ingredients''')
self.Entry1 = tk.Entry(top)
self.Entry1.place(relx=0.02, rely=0.529, height=34, relwidth=0.377)
self.Entry1.configure(background="white")
self.Entry1.configure(disabledforeground="#a3a3a3")
self.Entry1.configure(font="-family {Courier New} -size 18")
self.Entry1.configure(foreground="#000000")
self.Entry1.configure(highlightbackground="#d9d9d9")
self.Entry1.configure(highlightcolor="black")
self.Entry1.configure(insertbackground="black")
self.Entry1.configure(selectbackground="#c4c4c4")
self.Entry1.configure(selectforeground="black")
self.Button2 = tk.Button(top)
self.Button2.place(relx=0.08, rely=0.609, height=53, width=113)
self.Button2.configure(activebackground="#ececec")
self.Button2.configure(activeforeground="#000000")
self.Button2.configure(background="#b70218")
self.Button2.configure(disabledforeground="#a3a3a3")
self.Button2.configure(font="-family {Segoe UI} -size 14 -weight bold")
self.Button2.configure(foreground="#ffffff")
self.Button2.configure(highlightbackground="#d9d9d9")
self.Button2.configure(highlightcolor="black")
self.Button2.configure(pady="0")
self.Button2.configure(text='''Submit''')
self.Button2.configure(command=self.recommend_recipes)
self.Button2_3 = tk.Button(top)
self.Button2_3.place(relx=0.17, rely=0.609, height=53, width=290)
self.Button2_3.configure(activebackground="#ececec")
self.Button2_3.configure(activeforeground="#000000")
self.Button2_3.configure(background="#152ba4")
self.Button2_3.configure(disabledforeground="#a3a3a3")
self.Button2_3.configure(font="-family {Segoe UI} -size 14 -weight bold")
self.Button2_3.configure(foreground="#ffffff")
self.Button2_3.configure(highlightbackground="#d9d9d9")
self.Button2_3.configure(highlightcolor="black")
self.Button2_3.configure(pady="0")
        self.Button2_3.configure(text='''Co-occurring Ingredient Search''')
self.Button2_3.configure(command=self.suggest)
self.Scrolledlistbox1 = ScrolledListBox(top, selectmode=SINGLE)
self.Scrolledlistbox1.place(relx=0.51, rely=0.29, relheight=0.546
, relwidth=0.32)
self.Scrolledlistbox1.configure(background="white")
self.Scrolledlistbox1.configure(cursor="xterm")
self.Scrolledlistbox1.configure(disabledforeground="#a3a3a3")
self.Scrolledlistbox1.configure(font=font10)
self.Scrolledlistbox1.configure(cursor='hand2')
self.Scrolledlistbox1.configure(foreground="black")
self.Scrolledlistbox1.configure(highlightbackground="#d9d9d9")
self.Scrolledlistbox1.configure(highlightcolor="#d9d9d9")
self.Scrolledlistbox1.configure(selectbackground="#c4c4c4")
self.Scrolledlistbox1.configure(selectforeground="black")
self.Scrolledlistbox1.bind('<<ListboxSelect>>',self.list_item_select)
# The following code is added to facilitate the Scrolled widgets you specified.
class AutoScroll(object):
'''Configure the scrollbars for a widget.'''
def __init__(self, master):
# Rozen. Added the try-except clauses so that this class
# could be used for scrolled entry widget for which vertical
# scrolling is not supported. 5/7/14.
try:
vsb = ttk.Scrollbar(master, orient='vertical', command=self.yview)
except:
pass
hsb = ttk.Scrollbar(master, orient='horizontal', command=self.xview)
try:
self.configure(yscrollcommand=self._autoscroll(vsb))
except:
pass
self.configure(xscrollcommand=self._autoscroll(hsb))
self.grid(column=0, row=0, sticky='nsew')
try:
vsb.grid(column=1, row=0, sticky='ns')
except:
pass
hsb.grid(column=0, row=1, sticky='ew')
master.grid_columnconfigure(0, weight=1)
master.grid_rowconfigure(0, weight=1)
# Copy geometry methods of master (taken from ScrolledText.py)
if py3:
methods = tk.Pack.__dict__.keys() | tk.Grid.__dict__.keys() \
| tk.Place.__dict__.keys()
else:
methods = tk.Pack.__dict__.keys() + tk.Grid.__dict__.keys() \
+ tk.Place.__dict__.keys()
for meth in methods:
if meth[0] != '_' and meth not in ('config', 'configure'):
setattr(self, meth, getattr(master, meth))
@staticmethod
def _autoscroll(sbar):
'''Hide and show scrollbar as needed.'''
def wrapped(first, last):
first, last = float(first), float(last)
if first <= 0 and last >= 1:
sbar.grid_remove()
else:
sbar.grid()
sbar.set(first, last)
return wrapped
def __str__(self):
return str(self.master)
def _create_container(func):
'''Creates a ttk Frame with a given master, and use this new frame to
place the scrollbars and the widget.'''
def wrapped(cls, master, **kw):
container = ttk.Frame(master)
container.bind('<Enter>', lambda e: _bound_to_mousewheel(e, container))
container.bind('<Leave>', lambda e: _unbound_to_mousewheel(e, container))
return func(cls, container, **kw)
return wrapped
class ScrolledListBox(AutoScroll, tk.Listbox):
'''A standard Tkinter Listbox widget with scrollbars that will
automatically show/hide as needed.'''
@_create_container
def __init__(self, master, **kw):
tk.Listbox.__init__(self, master, **kw)
AutoScroll.__init__(self, master)
def size_(self):
sz = tk.Listbox.size(self)
return sz
import platform
def _bound_to_mousewheel(event, widget):
child = widget.winfo_children()[0]
if platform.system() == 'Windows' or platform.system() == 'Darwin':
child.bind_all('<MouseWheel>', lambda e: _on_mousewheel(e, child))
child.bind_all('<Shift-MouseWheel>', lambda e: _on_shiftmouse(e, child))
else:
child.bind_all('<Button-4>', lambda e: _on_mousewheel(e, child))
child.bind_all('<Button-5>', lambda e: _on_mousewheel(e, child))
child.bind_all('<Shift-Button-4>', lambda e: _on_shiftmouse(e, child))
child.bind_all('<Shift-Button-5>', lambda e: _on_shiftmouse(e, child))
def _unbound_to_mousewheel(event, widget):
if platform.system() == 'Windows' or platform.system() == 'Darwin':
widget.unbind_all('<MouseWheel>')
widget.unbind_all('<Shift-MouseWheel>')
else:
widget.unbind_all('<Button-4>')
widget.unbind_all('<Button-5>')
widget.unbind_all('<Shift-Button-4>')
widget.unbind_all('<Shift-Button-5>')
def _on_mousewheel(event, widget):
if platform.system() == 'Windows':
widget.yview_scroll(-1 * int(event.delta / 120), 'units')
elif platform.system() == 'Darwin':
widget.yview_scroll(-1 * int(event.delta), 'units')
else:
if event.num == 4:
widget.yview_scroll(-1, 'units')
elif event.num == 5:
widget.yview_scroll(1, 'units')
def _on_shiftmouse(event, widget):
if platform.system() == 'Windows':
widget.xview_scroll(-1 * int(event.delta / 120), 'units')
elif platform.system() == 'Darwin':
widget.xview_scroll(-1 * int(event.delta), 'units')
else:
if event.num == 4:
widget.xview_scroll(-1, 'units')
elif event.num == 5:
widget.xview_scroll(1, 'units')
###Output
_____no_output_____
###Markdown
**Main screen for creating and invoking GUI screen**
###Code
if __name__ == '__main__':
vp_start_gui()
###Output
_____no_output_____
###Markdown
Using KNN clustering (cosine nearest neighbours on the movie-user ratings matrix)
###Code
from sklearn.neighbors import NearestNeighbors
avg_movie_rating.head()
#only include movies with more than 10 ratings
movie_plus_10_ratings = avg_movie_rating.loc[avg_movie_rating['count']>=10]
print(len(movie_plus_10_ratings))
movie_plus_10_ratings
filtered_ratings = pd.merge(movie_plus_10_ratings, ratings, on="movieId")
len(filtered_ratings)
filtered_ratings.head()
#create a matrix table with movieIds on the rows and userIds in the columns.
#replace NAN values with 0
movie_wide = filtered_ratings.pivot(index = 'movieId', columns = 'userId', values = 'rating').fillna(0)
movie_wide.head()
#specify model parameters
model_knn = NearestNeighbors(metric='cosine',algorithm='brute')
#fit model to the data set
model_knn.fit(movie_wide)
#Gets the top 10 nearest neighbours got the movie
def print_similar_movies(query_index) :
    #get the vector of user ratings for the specified movieId (rows of movie_wide are movies, columns are users)
query_index_movie_ratings = movie_wide.loc[query_index,:].values.reshape(1,-1)
#get the closest 10 movies and their distances from the movie specified
distances,indices = model_knn.kneighbors(query_index_movie_ratings,n_neighbors = 11)
#write a loop that prints the similar movies for a specified movie.
for i in range(0,len(distances.flatten())):
#get the title of the random movie that was chosen
get_movie = movie_list.loc[movie_list['movieId']==query_index]['title']
#for the first movie in the list i.e closest print the title
if i==0:
print('Recommendations for {0}:\n'.format(get_movie))
else :
            #get the indices for the closest movies
indices_flat = indices.flatten()[i]
#get the title of the movie
get_movie = movie_list.loc[movie_list['movieId']==movie_wide.iloc[indices_flat,:].name]['title']
#print the movie
print('{0}: {1}, with distance of {2}:'.format(i,get_movie,distances.flatten()[i]))
print_similar_movies(112552)
print_similar_movies(1)
print_similar_movies(96079)
movies_with_genres.head()
#Build a content matrix containing only the one-hot genre columns (dropping id, title and genre text)
movie_content_df_temp = movies_with_genres.copy()
movie_content_df_temp.set_index('movieId')
movie_content_df = movie_content_df_temp.drop(columns = ['movieId','title','genres'])
#movie_content_df = movie_content_df.as_matrix()
movie_content_df
# Import linear_kernel
from sklearn.metrics.pairwise import linear_kernel
# Compute the pairwise similarity matrix (linear_kernel is a dot product; on one-hot genre columns this counts shared genres)
cosine_sim = linear_kernel(movie_content_df,movie_content_df)
# Similarity of the movies based on the content
cosine_sim
indicies = pd.Series(movie_content_df_temp.index, movie_content_df_temp['title'])
indicies
#Gets the top 10 similar movies based on the content
def get_similar_movies_based_on_content(movie_index) :
sim_scores = list(enumerate(cosine_sim[movie_index]))
# Sort the movies based on the similarity scores
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
# Get the scores of the 10 most similar movies
sim_scores = sim_scores[0:11]
print(sim_scores)
# Get the movie indices
movie_indices = [i[0] for i in sim_scores]
print(movie_indices)
similar_movies = pd.DataFrame(movie_content_df_temp[['title','genres']].iloc[movie_indices])
return similar_movies
indicies["Skyfall (2012)"]
get_similar_movies_based_on_content(18337)
get_similar_movies_based_on_content(19338)
#get ordered list of movieIds
item_indices = pd.DataFrame(sorted(list(set(ratings['movieId']))),columns=['movieId'])
#add in data frame index value to data frame
item_indices['movie_index']=item_indices.index
#inspect data frame
item_indices.head()
#get ordered list of movieIds
user_indices = pd.DataFrame(sorted(list(set(ratings['userId']))),columns=['userId'])
#add in data frame index value to data frame
user_indices['user_index']=user_indices.index
#inspect data frame
user_indices.head()
#join the movie indices
df_with_index = pd.merge(ratings,item_indices,on='movieId')
#join the user indices
df_with_index=pd.merge(df_with_index,user_indices,on='userId')
#inspect the data frame
df_with_index.head()
#import train_test_split module
from sklearn.model_selection import train_test_split
#take 80% as the training set and 20% as the test set
df_train, df_test= train_test_split(df_with_index,test_size=0.2)
print(len(df_train))
print(len(df_test))
df_train.head()
df_test.head()
n_users = ratings.userId.unique().shape[0]
n_items = ratings.movieId.unique().shape[0]
print(n_users)
print(n_items)
#Create two user-item matrices, one for training and another for testing
train_data_matrix = np.zeros((n_users, n_items))
#for every line in the data
for line in df_train.itertuples():
#set the value in the column and row to
#line[1] is userId, line[2] is movieId and line[3] is rating, line[4] is movie_index and line[5] is user_index
train_data_matrix[line[5], line[4]] = line[3]
train_data_matrix.shape
#Create two user-item matrices, one for training and another for testing
test_data_matrix = np.zeros((n_users, n_items))
#for every line in the data
for line in df_test.itertuples():
#set the value in the column and row to
#line[1] is userId, line[2] is movieId and line[3] is rating, line[4] is movie_index and line[5] is user_index
#print(line[2])
test_data_matrix[line[5], line[4]] = line[3]
#train_data_matrix[line['movieId'], line['userId']] = line['rating']
test_data_matrix.shape
pd.DataFrame(train_data_matrix).head()
df_train['rating'].max()
from sklearn.metrics import mean_squared_error
from math import sqrt
def rmse(prediction, ground_truth):
#select prediction values that are non-zero and flatten into 1 array
prediction = prediction[ground_truth.nonzero()].flatten()
#select test values that are non-zero and flatten into 1 array
ground_truth = ground_truth[ground_truth.nonzero()].flatten()
#return RMSE between values
return sqrt(mean_squared_error(prediction, ground_truth))
#Calculate the rmse score of SVD using different values of k (latent features)
from scipy.sparse.linalg import svds
rmse_list = []
for i in [1,2,5,20,40,60,100,200]:
    #apply truncated SVD to the training data
    u,s,vt = svds(train_data_matrix,k=i)
#get diagonal matrix
s_diag_matrix=np.diag(s)
#predict x with dot product of u s_diag and vt
X_pred = np.dot(np.dot(u,s_diag_matrix),vt)
#calculate rmse score of matrix factorisation predictions
rmse_score = rmse(X_pred,test_data_matrix)
rmse_list.append(rmse_score)
print("Matrix Factorisation with " + str(i) +" latent features has a RMSE of " + str(rmse_score))
#Convert predictions to a DataFrame
mf_pred = pd.DataFrame(X_pred)
mf_pred.head()
df_names = pd.merge(ratings,movie_list,on='movieId')
df_names.head()
#choose a user ID
user_id = 1
#get movies rated by this user id
users_movies = df_names.loc[df_names["userId"]==user_id]
#print how many ratings user has made
print("User ID : " + str(user_id) + " has already rated " + str(len(users_movies)) + " movies")
#list movies that have been rated
users_movies
user_index = df_train.loc[df_train["userId"]==user_id]['user_index'][:1].values[0]
#get movie ratings predicted for this user and sort by highest rating prediction
sorted_user_predictions = pd.DataFrame(mf_pred.iloc[user_index].sort_values(ascending=False))
#rename the columns
sorted_user_predictions.columns=['ratings']
#save the index values as movie id
sorted_user_predictions['movieId']=sorted_user_predictions.index
print("Top 10 predictions for User " + str(user_id))
#display the top 10 predictions for this user
pd.merge(sorted_user_predictions,movie_list, on = 'movieId')[:10]
#count number of unique users
numUsers = df_train.userId.unique().shape[0]
#count number of unique movies
numMovies = df_train.movieId.unique().shape[0]
print(len(df_train))
print(numUsers)
print(numMovies)
#Separate out the values of the df_train data set into separate variables
Users = df_train['userId'].values
Movies = df_train['movieId'].values
Ratings = df_train['rating'].values
print(Users),print(len(Users))
print(Movies),print(len(Movies))
print(Ratings),print(len(Ratings))
###Output
[863 283 625 ... 952 860 719]
80000
[ 940 2431 8121 ... 2531 8972 93242]
80000
[3. 4.5 5. ... 3. 2. 3.5]
80000
###Markdown
Dimension Departement
###Code
dim_departement = pd.read_excel('orientation_dataset/orientation2.xlsx',sheet_name="DimDepartment")
dim_departement
###Output
_____no_output_____
###Markdown
Dimension Commune
###Code
dim_commune = pd.read_excel('orientation_dataset/orientation2.xlsx',sheet_name="DimCommune")
dim_commune
###Output
_____no_output_____
###Markdown
Dimension Section Communale
###Code
dim_sec_com= pd.read_excel('orientation_dataset/orientation2.xlsx',sheet_name="DimSecCommunale")
dim_sec_com
###Output
_____no_output_____
###Markdown
Dimension KPI Description
###Code
dim_kpi_desc= pd.read_excel('orientation_dataset/orientation2.xlsx',sheet_name="DimKPIDescription")
dim_kpi_desc
###Output
_____no_output_____
###Markdown
Dimension Date
###Code
dim_date= pd.read_excel('orientation_dataset/orientation2.xlsx',sheet_name="DimDate")
dim_date
###Output
_____no_output_____
###Markdown
Key Performance Indicator (KPI) Fact Table
###Code
fact_kpi= pd.read_excel('orientation_dataset/orientation2.xlsx',sheet_name="FactKPITable")
fact_kpi
###Output
_____no_output_____
###Markdown
Scenario 1: [D|D] = [D|C] x [C|M] x ([C|M] x [D|C])T
###Code
template = ('D','D')
max_operations = 4
simulate(template, max_operations, BASIC_TILES)
###Output
Total number of possible tiles after 4 operations: 231
Ranking of the correct answer in the suggestions compatible with [D|D]: 3
Rank of possible compatible answers: [('D', 'C', 'C', 'D'), ('D', 'C', 'C', 'F', 'F', 'C', 'C', 'D'), ('D', 'C', 'C', 'M', 'M', 'C', 'C', 'D')]
###Markdown
Scenario 2: [D|D] = [D|C] x [I|C]T x [I|C] x [D|C]T
###Code
template = ('D','D')
max_operations = 5
simulate(template, max_operations, BASIC_TILES)
###Output
Total number of possible tiles after 5 operations: 450
Ranking of the correct answer in the suggestions compatible with [D|D]: 5
Rank of possible compatible answers: [('D', 'C', 'C', 'D'), ('D', 'C', 'C', 'F', 'F', 'C', 'C', 'D'), ('D', 'C', 'C', 'M', 'M', 'C', 'C', 'D'), ('D', 'C', 'C', 'D', 'D', 'C', 'C', 'D'), ('D', 'C', 'C', 'I', 'I', 'C', 'C', 'D')]
###Markdown
Scenario 3: [D|Cl] = [D|C] x [C|M] x [Cl|M]T
###Code
template = ('D','Cl')
max_operations = 3
simulate(template, max_operations, BASIC_TILES)
###Output
Total number of possible tiles after 3 operations: 117
Ranking of the correct answer in the suggestions compatible with [D|Cl]: 2
Rank of possible compatible answers: [('D', 'C', 'C', 'F', 'F', 'Cl'), ('D', 'C', 'C', 'M', 'M', 'Cl')]
###Markdown
Scenario 4: [C|Cl] = [C|M] x [Cl|M]T
###Code
template = ('C','Cl')
max_operations = 2
simulate(template, max_operations, BASIC_TILES)
###Output
Total number of possible tiles after 2 operations: 54
Ranking of the correct answer in the suggestions compatible with [C|Cl]: 2
Rank of possible compatible answers: [('C', 'F', 'F', 'Cl'), ('C', 'M', 'M', 'Cl')]
|
Session-04/notebooks/MNIST_model_03.ipynb
|
###Markdown
MNIST CNN model

**Target to achieve**: 99.4% accuracy on the test dataset.
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir("./drive/My Drive/EVA/Session04")
###Output
_____no_output_____
###Markdown
Importing Libraries
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torchsummary import summary
from tqdm import tqdm
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20,10)
###Output
_____no_output_____
###Markdown
GPU for training
###Code
import tensorflow as tf
device_name = tf.test.gpu_device_name()
# tf.test.gpu_device_name() returns an empty string when no GPU is visible,
# so check the value instead of relying on an exception
if device_name:
    print(f"Found GPU at : {device_name}")
else:
    print("GPU device not found.")
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
use_cuda = True
print(f"Number of GPU's available : {torch.cuda.device_count()}")
print(f"GPU device name : {torch.cuda.get_device_name(0)}")
else:
print("No GPU available, using CPU instead")
device = torch.device("cpu")
use_cuda = False
###Output
Number of GPU's available : 1
GPU device name : Tesla P4
###Markdown
Downloading MNIST dataset

Things to keep in mind:
- The dataset is provided by the PyTorch (torchvision) community.
- The MNIST dataset contains:
  - 60,000 training images
  - 10,000 test images
  - Each image is of size (28x28x1).
- The values 0.1307 and 0.3081 used for the Normalize() transformation below are the global mean and standard deviation of the MNIST dataset.
###Code
batch_size = 128
num_epochs = 20
kernel_size = 3
pool_size = 2
lr = 0.01
momentum = 0.9
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
mnist_trainset = datasets.MNIST(root="./data", train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
mnist_testset = datasets.MNIST(root="./data", train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
train_loader = torch.utils.data.DataLoader(mnist_trainset,
batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(mnist_testset,
batch_size=batch_size, shuffle=True, **kwargs)
###Output
_____no_output_____
###Markdown
Visualization of images
###Code
examples = enumerate(train_loader)
batch_idx, (example_data, example_targets) = next(examples)
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(example_data[i][0], interpolation='none')
plt.title(f"Ground Truth : {example_targets[i]}")
###Output
_____no_output_____
###Markdown
Defining training and testing functions
###Code
from tqdm import tqdm
def train(model, device, train_loader, optimizer, epoch):
running_loss = 0.0
running_correct = 0
model.train()
pbar = tqdm(train_loader)
for batch_idx, (data, target) in enumerate(pbar):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
_, preds = torch.max(output.data, 1)
loss.backward()
optimizer.step()
#calculate training running loss
running_loss += loss.item()
running_correct += (preds == target).sum().item()
pbar.set_description(desc= f'loss={loss.item()} batch_id={batch_idx}')
print("\n")
print(f"Epoch {epoch} train loss: {running_loss/len(mnist_trainset):.3f} train acc: {running_correct/len(mnist_trainset):.3f}")
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
###Output
_____no_output_____
###Markdown
Building the model
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.drop = nn.Dropout2d(0.1)
self.conv1 = nn.Conv2d(1, 16, 3, padding=1) #(-1,28,28,3)>(-1,3,3,3,16)>(-1,28,28,16)
self.batchnorm1 = nn.BatchNorm2d(16) #(-1,28,28,16)
self.conv2 = nn.Conv2d(16, 16, 3, padding=1) #(-1,28,28,16)>(-1,3,3,16,16)>(-1,28,28,16)
self.batchnorm2 = nn.BatchNorm2d(16) #(-1,28,28,16)
self.pool1 = nn.MaxPool2d(2, 2) #(-1,14,14,16)
self.conv3 = nn.Conv2d(16, 16, 3, padding=1) #(-1,14,14,16)>(-1,3,3,16,16)>(-1,14,14,16)
self.batchnorm3 = nn.BatchNorm2d(16) #(-1,14,14,16)
self.conv4 = nn.Conv2d(16, 16, 3, padding=1) #(-1,14,14,16)>(-1,3,3,16,16)>(-1,14,14,16)
self.batchnorm4 = nn.BatchNorm2d(16) #(-1,14,14,16)
self.pool2 = nn.MaxPool2d(2, 2) #(-1,7,7,16)
self.conv5 = nn.Conv2d(16, 32, 3, padding=1) #(-1,7,7,16)>(-1,3,3,16,32)>(-1,7,7,32)
self.batchnorm5 = nn.BatchNorm2d(32)
self.conv6 = nn.Conv2d(32, 16, 3, padding=1) #(-1,7,7,32)>(-1,3,3,32,16)>(-1,7,7,16)
self.batchnorm6 = nn.BatchNorm2d(16)
self.conv7 = nn.Conv2d(16, 10, 3) #(-1,7,7,16)>(-1,3,3,16,10)>(-1,5,5,10)
self.avgpool = nn.AvgPool2d(5)
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.drop(x)
x = self.batchnorm1(x)
x = F.relu(self.conv2(x))
x = self.drop(x)
x = self.batchnorm2(x)
x = self.pool1(x)
x = F.relu(self.conv3(x))
x = self.drop(x)
x = self.batchnorm3(x)
x = F.relu(self.conv4(x))
x = self.drop(x)
x = self.batchnorm4(x)
x = self.pool2(x)
x = F.relu(self.conv5(x))
x = self.drop(x)
x = self.batchnorm5(x)
x = F.relu(self.conv6(x))
x = self.drop(x)
x = self.batchnorm6(x)
x = self.conv7(x)
x = self.avgpool(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=1)
model5 = Net().to(device)
summary(model5, input_size=(1, 28, 28))
# optimizer = optim.SGD(model5.parameters(), lr=lr, momentum=momentum)
optimizer = optim.Adam(model5.parameters(), lr=0.001)#, momentum=momentum)
for epoch in range(1, num_epochs+1):
train(model5, device, train_loader, optimizer, epoch)
test(model5, device, test_loader)
###Output
_____no_output_____
|
2019/IWBDA19/workshop/sbolWorkshop2018.ipynb
|
###Markdown
Introduction1. Create an account on SynBioHub2. Make sure you've downloaded `parts.xml` and it is placed somewhere convenient on your computer.3. Make sure you've downloaded `results.txt` and it is placed somewhere convenient on your computer.4. Install SBOL library in language of choice Getting a Device from an SBOL Compliant XML
###Code
from sbol import *
# Set the default namespace (e.g. “http://my_namespace.org”)
# Load some generic parts from `parts.xml` into another Document
# Inspect the Document
###Output
_____no_output_____
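###Markdown
One hedged way to fill in the cell above, assuming pySBOL2's `sbol` module, a placeholder namespace `http://my_namespace.org`, and `parts.xml` sitting in the working directory:
###Code
from sbol import *

# Sketch of a possible answer (the namespace is an assumption, not the workshop solution)
setHomespace('http://my_namespace.org')   # default namespace for newly created objects

doc = Document()
doc.read('parts.xml')                     # load the generic parts into the Document

# Inspect the Document: a summary of object counts, then each top-level object
print(doc)
for obj in doc:
    print(obj.displayId, obj.identity)
###Output
_____no_output_____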
###Markdown
Getting a Device from Synbiohub
###Code
# Start an interface to igem’s public part shop on SynBioHub. Located at `https://synbiohub.org/public/igem`
# Search the part shop for parts from the iGEM interlab study using the search term `interlab`
# Import the medium strength device into your document
###Output
_____no_output_____
###Markdown
Extracting ComponentDefinitions from a Pre-existing Device
###Code
# Extract the medium strength promoter `BBa_J23106` from your document.
# Extract the ribosomal binding site (rbs) `Q2` from your document.
# Extract the coding region (cds) `LuxR` from your document.
# Extract the terminator `ECK120010818` from your document.
###Output
_____no_output_____
###Markdown
Creating a New Device
###Code
# Create a new empty device named `my_device`
# Assemble the new device from the promoter, rbs, cds, and terminator from above.
# Compile the sequence for the new device
# Set the role of the device with the Sequence Ontology term `gene`
###Output
_____no_output_____
###Markdown
Managing a Design-Build-Test-Learn Workflow
###Code
# Create a new design in your document called `my_design`.
# Set the structure of the design to `my_device` from above, and the function of the device to
# `None` (not covered in this tutorial)
# Create three Activities [‘build`, `test`, `analysis`]
# Generate a build for your design out of your `build` activity. Name the result of the build step `transformed_cells`.
# Generate a test for your build out of your `test` activity. Name the test `my_experiment`.
# Generate an analysis of your test out of your `analysis` activity. Name the analysis `my_analysis`.
# Create Plans for each Activity: set the`build` plan to `transformation`, the `test` plan
# to `promoter_characterization`, and the `analysis` plan to `parameter_optimization`
# Temporarily disable auto-construction of URIs (For setting Agent URIs)
# Set Agents for each Activity: set the `build` agent to `mailto:[email protected]`, the `test` agent
# to `http://sys-bio.org/plate_reader_1`, and the `analysis` agent to `http://tellurium.analogmachine.org`
# Re-enable auto-construction of URIs
###Output
_____no_output_____
###Markdown
Uploading the Device Back to SynBioHub
###Code
# Connect to your account on SynBioHub
# Give your document a displayId, name, and description
# (e.g. `my_device`, `my device`, `a newly characterized device`)
# Submit the document to the part shop
###Output
_____no_output_____
|
_docs/nbs/2022-02-01-mlflow-part3.ipynb
|
###Markdown
MLFlow Part 3 Environment setup
###Code
import os
project_name = "reco-tut-de"; branch = "main"; account = "sparsh-ai"
project_path = os.path.join('/content', project_name)
if not os.path.exists(project_path):
!pip install -U -q dvc dvc[gdrive]
!pip install -q mlflow
!apt-get install tree
!cp /content/drive/MyDrive/mykeys.py /content
import mykeys
!rm /content/mykeys.py
path = "/content/" + project_name;
!mkdir "{path}"
%cd "{path}"
import sys; sys.path.append(path)
!git config --global user.email "[email protected]"
!git config --global user.name "reco-tut"
!git init
!git remote add origin https://"{mykeys.git_token}":[email protected]/"{account}"/"{project_name}".git
!git pull origin "{branch}"
!git checkout main
else:
%cd "{project_path}"
!git status
!git add . && git commit -m 'commit' && git push origin "{branch}"
!dvc commit && dvc push
%reload_ext autoreload
%autoreload 2
!make setup
###Output
_____no_output_____
###Markdown
Pull specific data file
###Code
!dvc pull ./data/silver/stockpred/train.csv.dvc
###Output
_____no_output_____
###Markdown
Reinitiate old project - ```stockpred_comparisons```
###Code
!cd /content/reco-tut-de && dvc pull -q ./src/mlflow/stockpred_comparisons/mlruns.dvc
from src.mlflow.utils import MLFlow
stockpred = MLFlow()
stockpred.create_project(name='stockpred_comparisons',
basepath='/content/reco-tut-de/src/mlflow',
entryfile='train.py')
stockpred.get_ui()
###Output
Project path already exists!
https://t1656s8q7s-496ff2e9c6d22116-18139-colab.googleusercontent.com/
###Markdown
Load model as a PyFuncModels
###Code
import pandas as pd
import mlflow
from sklearn.model_selection import train_test_split
logged_model = './mlruns/1/f1ccd1a06c3d4eec863dc1816f588b40/artifacts/model'
# Load model as a PyFuncModel
loaded_model = mlflow.pyfunc.load_model(logged_model)
# Load Data
pandas_df = pd.read_csv(os.path.join(project_path,'data/silver/stockpred/train.csv'))
X = pandas_df.iloc[:,:-1]
Y = pandas_df.iloc[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=4284, stratify=Y)
# Predict on a Pandas DataFrame
loaded_model.predict(pd.DataFrame(X_test))
###Output
_____no_output_____
###Markdown
Hyperparameter Tuning
###Code
!pip install -q hyperopt
# Import variables
from hyperopt import tpe
from hyperopt import STATUS_OK
from hyperopt import Trials
from hyperopt import hp
from hyperopt import fmin
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
import pandas
import mlflow
# Retrieve Data
pandas_df = pandas.read_csv(os.path.join(project_path,'data/silver/stockpred/train.csv'))
X = pandas_df.iloc[:,:-1]
Y = pandas_df.iloc[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=4284, stratify=Y)
# Define objective function
N_FOLDS = 4
MAX_EVALS = 10
def objective(params, n_folds = N_FOLDS):
"""Objective function for Logistic Regression hyperparameter tuning"""
# Perform n_folds-fold cross validation with the given hyperparameters
# and evaluate with the macro F1 score
mlflow.sklearn.autolog()
with mlflow.start_run(nested=True):
clf = LogisticRegression(**params, random_state=0, verbose=0)
scores = cross_val_score(clf, X_train, y_train, cv=n_folds, scoring='f1_macro')
# Extract the best score
best_score = max(scores)
# Loss must be minimized
loss = 1 - best_score
# Log the metric
mlflow.log_metric(key="f1_experiment_score", value=best_score)
# Dictionary with information for evaluation
return {'loss': loss, 'params': params, 'status': STATUS_OK}
# Define parameter space
space = {
'warm_start' : hp.choice('warm_start', [True, False]),
'fit_intercept' : hp.choice('fit_intercept', [True, False]),
'tol' : hp.uniform('tol', 0.00001, 0.0001),
'C' : hp.uniform('C', 0.05, 3),
'solver' : hp.choice('solver', ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']),
'max_iter' : hp.choice('max_iter', range(5,1000))
}
# Create experiment
mlflow.set_experiment("HyperOpt_Logistic")
# Define Optimization Trials
tpe_algorithm = tpe.suggest
# Trials object to track progress
bayes_trials = Trials()
with mlflow.start_run():
best = fmin(fn = objective, space = space, algo = tpe.suggest, max_evals = MAX_EVALS, trials = bayes_trials)
best
###Output
_____no_output_____
|
Custom_Dataset_BBC.ipynb
|
###Markdown
Training on Custom Dataset Training the model on the BBC extractive-summaries dataset after the required preprocessing
###Code
#Required Imports for dataset analysis plots and file handling
import os
import matplotlib.pyplot as plt
import seaborn as sns
#BBC News Summary Dataset distribution across 5 categories
#Set the path to the BBC dataset
BBC_DATA_PATH = "/"
classes = os.listdir( BBC_DATA_PATH + "/BBC News Summary/News Articles")
no_1=[]
for cat in classes:
no_1.append(len(os.listdir( BBC_DATA_PATH + "/BBC News Summary/News Articles/"+str(cat))))
plt.figure(figsize=(16,10))
dist = sns.barplot(x=classes,y=no_1,palette='coolwarm')
dist.set_xticklabels(dist.get_xticklabels(),rotation=0)
###Output
_____no_output_____
###Markdown
The following 5 cells are for preparing the dataset in the format required before preprocessing. This requires the one text file per sample with the news article at the top and each corresponding summary at the bottom under "@highlight" tags. An example is as follows:Veteran Martinez wins Thai titleConchita Martinez won her first title in almost five years with victory over Anna-Lena Groenefeld at the Volvo Women's Open in Pattaya, Thailand.The 32-year-old Spaniard came through 6-3 3-6 6-3 for her first title since Berlin in 2000. "It feels really good," said Martinez, who is playing her last season on the Tour. "To come through like that in an important match feels good. "It's been nearly five years and I didn't think I could do it." Groenefeld was the more powerful player but could not match her opponent's relentless accuracy. "It was my first final, a new experience," said the German. "I think she played a good match, a tough match, but I tried to stay in there. I think the whole week was good for me."@highlight"To come through like that in an important match feels good@highlight""I think she played a good match, a tough match, but I tried to stay in there@highlightGroenefeld was the more powerful player but could not match her opponent's relentless accuracy
###Code
for filename in os.listdir(BBC_DATA_PATH + "/BBC News Summary/News Articles/business"):
with open(BBC_DATA_PATH + "/BBC News Summary/Summaries/business/"+filename) as f1:
with open(BBC_DATA_PATH + "/BBC News Summary/News Articles/business/"+filename,'a') as f2:
for line in f1:
lines = line.split('.')
lines.sort(key = len)
lines = lines[-3:]
for summ in lines:
f2.write("\n")
f2.write("@highlight\n")
f2.write("\n")
f2.write(summ+"\n")
for filename in os.listdir(BBC_DATA_PATH + "/BBC/BBC News Summary/News Articles/entertainment"):
with open(BBC_DATA_PATH + "/BBC/BBC News Summary/Summaries/entertainment/"+filename) as f1:
with open( BBC_DATA_PATH + "/BBC News Summary/News Articles/entertainment/"+filename,'a') as f2:
for line in f1:
lines = line.split('.')
lines.sort(key = len)
lines = lines[-3:]
for summ in lines:
f2.write("\n")
f2.write("@highlight\n")
f2.write("\n")
f2.write(summ+"\n")
for filename in os.listdir( BBC_DATA_PATH + "/BBC News Summary/News Articles/politics"):
with open( BBC_DATA_PATH + "/BBC News Summary/Summaries/politics/"+filename) as f1:
with open( BBC_DATA_PATH + "/BBC News Summary/News Articles/politics/"+filename,'a') as f2:
for line in f1:
lines = line.split('.')
lines.sort(key = len)
lines = lines[-3:]
for summ in lines:
f2.write("\n")
f2.write("@highlight\n")
f2.write("\n")
f2.write(summ+"\n")
for filename in os.listdir( BBC_DATA_PATH + "/BBC News Summary/News Articles/sport"):
with open( BBC_DATA_PATH + "/BBC News Summary/Summaries/sport/"+filename) as f1:
with open( BBC_DATA_PATH + "/BBC News Summary/News Articles/sport/"+filename,'a') as f2:
for line in f1:
lines = line.split('.')
lines.sort(key = len)
lines = lines[-3:]
for summ in lines:
f2.write("\n")
f2.write("@highlight\n")
f2.write("\n")
f2.write(summ+"\n")
for filename in os.listdir( BBC_DATA_PATH + "/BBC News Summary/News Articles/tech"):
with open( BBC_DATA_PATH + "/BBC News Summary/Summaries/tech/"+filename) as f1:
with open( BBC_DATA_PATH + "/BBC News Summary/News Articles/tech/"+filename,'a') as f2:
for line in f1:
lines = line.split('.')
lines.sort(key = len)
lines = lines[-3:]
for summ in lines:
f2.write("\n")
f2.write("@highlight\n")
f2.write("\n")
f2.write(summ+"\n")
#Running a test on the imported StanfordCoreNLP tools for tokenization
!echo "Please tokenize this text." | java -cp stanford-corenlp-4.2.0/stanford-corenlp-4.2.0.jar edu.stanford.nlp.process.PTBTokenizer
#Adding ".story" extensions to prepared files
#Set the CUSTOM_DATA_PATH
CUSTOM_DATA_PATH = "/"
path = CUSTOM_DATA_PATH
files = os.listdir(path)
for index, file in enumerate(files):
#print(index,file[:-3])
os.rename(os.path.join(path, file), os.path.join(path, ''.join([file[:-4], '.story'])))
###Output
_____no_output_____
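###Markdown
The five per-category blocks above repeat the same logic. As an optional alternative (run instead of, not in addition to, those blocks), a single loop over the categories produces the same files, assuming the same `BBC_DATA_PATH` directory layout:
###Code
# Optional consolidated version of the per-category cells above
categories = ['business', 'entertainment', 'politics', 'sport', 'tech']
for cat in categories:
    articles_dir = BBC_DATA_PATH + "/BBC News Summary/News Articles/" + cat
    summaries_dir = BBC_DATA_PATH + "/BBC News Summary/Summaries/" + cat
    for filename in os.listdir(articles_dir):
        with open(os.path.join(summaries_dir, filename)) as f1, \
             open(os.path.join(articles_dir, filename), 'a') as f2:
            for line in f1:
                # Keep the three longest sentences of the summary as "@highlight" entries
                lines = line.split('.')
                lines.sort(key=len)
                for summ in lines[-3:]:
                    f2.write("\n@highlight\n\n" + summ + "\n")
###Output
_____no_output_____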
###Markdown
DATA PRE-PROCESSING
###Code
#Sentence Splitting and Tokenization
#The outputs are the samples tokenized saved as json files
#Set the paths indicated in all caps
!python preprocess.py -mode tokenize -raw_path CUSTOM_DATA_PATH -save_path SAVE_PATH
#Format to Simpler Json Files
#Set the paths indicated in all caps
!python preprocess.py -mode format_to_lines -raw_path CUSTOM_TOKENIZED_PATH -save_path JSON_SAVE_PATH -n_cpus 1 -use_bert_basic_tokenizer false
#Formatting to PyTorch Files (.pt) as final step of data preprocessing
#Since dataset is relatively smaller, we have used its entirety for training data
#Set the paths indicated in all caps
!python preprocess.py -mode format_to_bert -raw_path JSON_DATA_PATH -save_path PT_SAVE_PATH -lower -n_cpus 1 -log_file ../logs/preprocess.log
###Output
_____no_output_____
###Markdown
MODEL TRAINING
###Code
#Set the paths indicated in all caps
!python train.py -task ext -mode train -bert_data_path BERT_DATA_PATH -ext_dropout 0.1 -model_path MODEL_PATH -lr 2e-3 -visible_gpus 0 -report_every 50 -save_checkpoint_steps 1000 -batch_size 300 -train_steps 10000 -accum_count 5 -log_file ../logs/custom -use_interval true -warmup_steps 10000 -max_pos 512
###Output
_____no_output_____
###Markdown
GENERATING SUMMARIES FROM RAW TEXT INPUT HERE
###Code
#Set the paths indicated in all caps
!python train.py -task ext -mode test_text -text_src TEXT_SRC_PATH -result_path RESULT_PATH -test_from MODEL_CKPT_PATH
###Output
_____no_output_____
|
examples/reference/elements/matplotlib/BoxWhisker.ipynb
|
###Markdown
Title BoxWhisker Element Dependencies Matplotlib Backends Matplotlib Bokeh Plotly
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``BoxWhisker`` Element is a quick way of visually summarizing one or more groups of numerical data through their quartiles. The boxes of a ``BoxWhisker`` element represent the first, second and third quartiles. The whiskers follow the Tukey boxplot definition representing the lowest datum still within 1.5 IQR of the lower quartile, and the highest datum still within 1.5 IQR of the upper quartile. Any points falling outside this range are shown as distinct outlier points.The data of a ``BoxWhisker`` Element may have any number of key dimensions representing the grouping of the value dimension and a single value dimensions representing the distribution of values within each group. See the [Tabular Datasets](../../../user_guide/07-Tabular_Datasets.ipynb) user guide for supported data formats, which include arrays, pandas dataframes and dictionaries of arrays. Without any groups a BoxWhisker Element represents a single distribution of values:
###Code
hv.BoxWhisker(np.random.randn(1000), vdims='Value')
###Output
_____no_output_____
###Markdown
By supplying key dimensions we can compare our distributions across multiple variables.
###Code
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
box = hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
['Group', 'Category'], 'Value').sort()
box.opts(opts.BoxWhisker(aspect=2, fig_size=200, whiskerprops={'color': 'gray'}))
###Output
_____no_output_____
###Markdown
Title BoxWhisker Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``BoxWhisker`` Element is a quick way of visually summarizing one or more groups of numerical data through their quartiles. The boxes of a ``BoxWhisker`` element represent the first, second and third quartiles. The whiskers follow the Tukey boxplot definition representing the lowest datum still within 1.5 IQR of the lower quartile, and the highest datum still within 1.5 IQR of the upper quartile. Any points falling outside this range are shown as distinct outlier points.The data of a ``BoxWhisker`` Element may have any number of key dimensions representing the grouping of the value dimension and a single value dimensions representing the distribution of values within each group. See the [Tabular Datasets](../../../user_guide/07-Tabular_Datasets.ipynb) user guide for supported data formats, which include arrays, pandas dataframes and dictionaries of arrays. Without any groups a BoxWhisker Element represents a single distribution of values:
###Code
hv.BoxWhisker(np.random.randn(1000), vdims='Value')
###Output
_____no_output_____
###Markdown
By supplying key dimensions we can compare our distributions across multiple variables.
###Code
%%opts BoxWhisker [aspect=2 fig_size=200 show_legend=False] (whiskerprops={'color': 'gray'})
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
['Group', 'Category'], 'Value').sort()
###Output
_____no_output_____
###Markdown
Title BoxWhisker Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``BoxWhisker`` Element is a quick way of visually summarizing one or more groups of numerical data through their quartiles. The boxes of a ``BoxWhisker`` element represent the first, second and third quartiles. The whiskers follow the Tukey boxplot definition representing the lowest datum still within 1.5 IQR of the lower quartile, and the highest datum still within 1.5 IQR of the upper quartile. Any points falling outside this range are shown as distinct outlier points.The data of a ``BoxWhisker`` Element may have any number of key dimensions representing the grouping of the value dimension and a single value dimensions representing the distribution of values within each group. See the [Tabular Datasets](../../../user_guide/07-Tabular_Datasets.ipynb) user guide for supported data formats, which include arrays, pandas dataframes and dictionaries of arrays. Without any groups a BoxWhisker Element represents a single distribution of values:
###Code
hv.BoxWhisker(np.random.randn(1000), vdims='Value')
###Output
_____no_output_____
###Markdown
By supplying key dimensions we can compare our distributions across multiple variables.
###Code
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
box = hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
['Group', 'Category'], 'Value').sort()
box.opts(opts.BoxWhisker(aspect=2, fig_size=200, whiskerprops={'color': 'gray'}))
###Output
_____no_output_____
###Markdown
Title BoxWhisker Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``BoxWhisker`` Element is a quick way of visually summarizing one or more groups of numerical data through their quartiles. The boxes of a ``BoxWhisker`` element represent the first, second and third quartiles. The whiskers follow the Tukey boxplot definition representing the lowest datum still within 1.5 IQR of the lower quartile, and the highest datum still within 1.5 IQR of the upper quartile. Any points falling outside this range are shown as distinct outlier points.The data of a ``BoxWhisker`` Element may have any number of key dimensions representing the grouping of the value dimension and a single value dimensions representing the distribution of values within each group. See the [Tabular Datasets](../../../user_guide/07-Tabular_Datasets.ipynb) user guide for supported data formats, which include arrays, pandas dataframes and dictionaries of arrays. Without any groups a BoxWhisker Element represents a single distribution of values:
###Code
hv.BoxWhisker(np.random.randn(1000), vdims='Value')
###Output
_____no_output_____
###Markdown
By supplying key dimensions we can compare our distributions across multiple variables.
###Code
%%opts BoxWhisker [width=600 height=400 show_legend=False] (whisker_color='gray' color='white')
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
['Group', 'Category'], 'Value').sort()
###Output
_____no_output_____
###Markdown
Title BoxWhisker Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``BoxWhisker`` Element is a quick way of visually summarizing one or more groups of numerical data through their quartiles. The data of a ``BoxWhisker`` Element may have any number of key dimensions representing the grouping of the value dimension and a single value dimensions representing the distribution of values within each group. See the [Tabular Datasets](../../../user_guide/07-Tabular_Datasets.ipynb) user guide for supported data formats, which include arrays, pandas dataframes and dictionaries of arrays. Without any groups a BoxWhisker Element represents a single distribution of values:
###Code
hv.BoxWhisker(np.random.randn(1000), vdims=['Value'])
###Output
_____no_output_____
###Markdown
By supplying key dimensions we can compare our distributions across multiple variables.
###Code
%%opts BoxWhisker [width=600 height=400 show_legend=False] (whisker_color='gray' color='white')
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
kdims=['Group', 'Category'], vdims=['Value']).sort()
###Output
_____no_output_____
|
nbs/dl1/lesson1-workbook.ipynb
|
###Markdown
Lesson 1 - Workbook
###Code
# Automatically reload libraries when edited
%reload_ext autoreload
%autoreload 2
# Display charts and images in the notebook
%matplotlib inline
# Import packages
from fastai.vision import *
from fastai.metrics import error_rate
# Set batch size
bs = 64
help(untar_data)
path = untar_data(URLs.PETS); path
###Output
_____no_output_____
|
Trading_Strategies/Strategy_Evalution_Tools/Turtle_Evaluation.ipynb
|
###Markdown
Turtle Trading Rules - Evaluating Robust Investment StrategiesThe Turtle Trading book mentions several methods for evaluating how robust an investment strategy is:1. RAR - Regressed Annual ReturnFor a volatile NAV (equity) curve, RAR measures the curve's growth and rate of return more stably than CAGR does.
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model

def RAR(ret):
n = ret.count()
nav = (1+ret).cumprod()
cagr = (nav.iloc[-1]/nav.iloc[0] - 1) / n
reg = linear_model.LinearRegression()
X = np.array(range(n), ndmin=2).transpose()
y = np.array(nav.values, ndmin=2).transpose()
reg.fit(X, y)
rar = (reg.predict(X[[-1]])[0, 0] / reg.predict(X[[0]])[0, 0] - 1) / n
return cagr, rar, reg
def Sharpe(ret, annualized_factor = 365) :
cagr, rar, reg = RAR(ret)
cagr = cagr * annualized_factor
rar = rar * annualized_factor
vol = np.std(ret) * np.sqrt(annualized_factor)
r_sharpe = rar / vol
sharpe = cagr / vol
return sharpe, r_sharpe
###Output
_____no_output_____
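###Markdown
As an added sanity check (not part of the original analysis), the functions above can be exercised on a synthetic return series: a gently drifting random series should give similar CAGR and RAR values and similar plain versus regressed Sharpe ratios.
###Code
# Added check on synthetic data; the drift and volatility values are arbitrary assumptions
import pandas as pd

np.random.seed(0)
synthetic_ret = pd.Series(np.random.normal(0.0005, 0.01, 730))  # two years of daily returns
cagr, rar, _ = RAR(synthetic_ret)
sharpe, r_sharpe = Sharpe(synthetic_ret)
print([cagr * 365, rar * 365, sharpe, r_sharpe])
###Output
_____no_output_____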
###Markdown
Test
###Code
ret_test = ret_all['cc']
cagr, rar, reg = RAR(ret_test)
nav = (1+ret_test).cumprod()
X = np.array(range(nav.count()), ndmin=2).transpose();
plt.plot(nav)
plt.plot(X, reg.predict(X),color='red',linewidth=4)
[rar, cagr]
sharpe, r_sharpe = Sharpe(ret_test)
[sharpe, r_sharpe]
reg.coef_
linear_model.LinearRegression?
ret_test = ret_all['cc'][1:500]
cagr, rar, reg = RAR(ret_test)
nav = (1+ret_test).cumprod()
X = np.array(range(nav.count()), ndmin=2).transpose();
plt.plot(nav)
plt.plot(X, reg.predict(X),color='red',linewidth=4)
[rar, cagr]
sharpe, r_sharpe = Sharpe(ret)
sharpe1, r_sharpe1 = Sharpe(ret[1:1000])
[sharpe, r_sharpe, sharpe1, r_sharpe1]
###Output
_____no_output_____
###Markdown
2. Robust risk/reward ratio
###Code
def MDD(ret, N):
ret = ret.dropna()
ret = ret[100:]   # drop the first 100 observations
nav = (1+ret).cumprod()
high_wm = nav * 0 #high water mark
for i in range(len(ret)) :
if i == 0:
high_wm[i] = nav[i]
else:
high_wm[i] = nav[i] if nav[i] > high_wm[i-1] else high_wm[i-1]
dd = nav - high_wm ## drawdown curves
### determine the numbers of the drawdown periods, and their start/end index
start = []
end = []
for j in range(len(dd)) :
if j > 0:
if dd[j] < 0 and dd[j - 1] == 0:
start.append(j);
if dd[j] == 0 and dd[j -1] < 0:
end.append(j);
if dd[j] <0 and j == len(dd) - 1:
end.append(j);
### drawdown percentage
dd_pct = dd * 0
n_dd = len(start)
for k in range(n_dd):
dd_pct[start[k]:end[k]] = nav[start[k]:end[k]] / nav[start[k]-1] - 1
###
dd_size = []
dd_duration = []
n_dd = len(start)
for k in range(n_dd):
dd_size.append(min(dd_pct[start[k]:end[k]]))
dd_duration.append(end[k] - start[k])
### top N largest drawdown
max_dd_size = []
max_dd_duration = []
for l in range(N) :
max_dd = min(dd_size)
index = dd_size.index(max_dd)
max_dd_size.append(dd_size.pop(index))
max_dd_duration.append(dd_duration.pop(index))
### output
return max_dd_size, max_dd_duration
### length_adjusted_MDD annualize MDD with their average length.
### the formula is : Average_Max_DD / Average_DD_Duration * Annulized_factor (365 by default, if days are using)
def length_adjusted_MDD(ret, N = 5, annulized_factor = 365) :
max_dd_size, max_dd_duration = MDD(ret, N);
avg_mdd = np.mean(max_dd_size)
avg_mdd_duration = np.mean(max_dd_duration)
la_MDD = avg_mdd / avg_mdd_duration * annulized_factor
return la_MDD
def RRR(ret, N = 5, annulized_factor = 365):
cagr, rar, reg= RAR(ret)
rar = rar * annulized_factor
la_mdd = length_adjusted_MDD(ret, N, annulized_factor)
rrr = rar / abs(la_mdd)
return rrr
###Output
_____no_output_____
###Markdown
Test
###Code
rrr = RRR(ret, 5)
rrr1 = RRR(ret, 10)
[rrr, rrr1]
###Output
_____no_output_____
|
dataScience/02_NapatRc_l5_Matplotlib_Exercises.ipynb
|
###Markdown
Matplotlib Exercises Welcome to the exercises for reviewing matplotlib! Take your time with these, Matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, feel free to reference the solutions as you go along.Also don't worry if you find the matplotlib syntax frustrating, we actually won't be using it that often throughout the course, we will switch to using seaborn and pandas built-in visualization capabilities. But, those are built-off of matplotlib, which is why it is still important to get exposure to it!** * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD ALL GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * ** ExercisesFollow the instructions to recreate the plots using this data: Data
###Code
import numpy as np
x = np.arange(0,100)
y = x*2
z = x**2
###Output
_____no_output_____
###Markdown
** Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?**
###Code
import matplotlib.pyplot as plt
%matplotlib inline
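# Outside a Jupyter notebook you would call plt.show() to display the figures instead of relying on %matplotlib inline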
###Output
_____no_output_____
###Markdown
Exercise 1** Follow along with these steps: *** ** Create a figure object called fig using plt.figure() *** ** Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. *** ** Plot (x,y) on that axes and set the labels and titles to match the plot below:**
###Code
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
ax.set_xlabel('x') # Notice the use of set_ to begin methods
ax.set_ylabel('y')
ax.set_title('title')
plt.xlim( 0, 100 )
plt.ylim( 0, 200 )
###Output
_____no_output_____
###Markdown
Exercise 2** Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.**
###Code
fig = plt.figure()
ax1 = fig.add_axes([0,0,1,1]) # main axes
ax2 = fig.add_axes([0.2,0.5,.2,.2]) # inset axes
###Output
_____no_output_____
###Markdown
** Now plot (x,y) on both axes. And call your figure object to show it.**
###Code
fig = plt.figure()
ax1 = fig.add_axes([0,0,1,1]) # main axes
ax2 = fig.add_axes([0.2,0.5,.2,.2]) # inset axes
# Larger Figure Axes 1
ax1.plot(x, y, 'r')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
plt.xlim( 0, 100 )
plt.ylim( 0, 200 )
# Insert Figure Axes 2
ax2.plot(x, y, 'r')
ax2.set_xlabel('x')
ax2.set_ylabel('y')
plt.xlim( 0, 100 )
plt.ylim( 0, 200 )
###Output
_____no_output_____
###Markdown
Exercise 3** Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]**
###Code
fig = plt.figure()
ax1 = fig.add_axes([0,0,1,1]) # main axes
ax2 = fig.add_axes([0.2,0.5,.4,.4]) # inset axes
###Output
_____no_output_____
###Markdown
** Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:**
###Code
fig = plt.figure()
# Larger Figure Axes 1
#x = np.linspace(0, 100, 201)
x = np.arange(100)
z = x ** 2
ax1 = fig.add_axes([0,0,1,1]) # main axes
plt.plot(x, z, 'b')
plt.xlabel('X')
plt.ylabel('Z')
plt.xlim( 0, 100 )
plt.ylim( 0, 10000 )
# Smaller Figure Axes 2
ax2 = fig.add_axes([0.2,0.5,.4,.4]) # inset axes
y = 2*x
plt.plot(x, y, 'b')  # same data as the main axes; the zoom comes from the xlim/ylim below
plt.xlabel('X')
plt.ylabel('Y')
ax2.set_title("zoom")
plt.xlim( 20, 22 )
plt.ylim( 30, 50 )
###Output
_____no_output_____
###Markdown
Exercise 4** Use plt.subplots(nrows=1, ncols=2) to create the plot below.**
###Code
fig, axes = plt.subplots(1, 2)
###Output
_____no_output_____
###Markdown
** Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style**
###Code
fig, axes = plt.subplots(1, 2)
x = np.arange(101)
# axes[0]
y1 = 2*x
axes[0].plot(x, y1,'b--')
axes[0].set_xlim( 0, 100 )
axes[0].set_ylim( 0, 200 )
# axes[1]
y2 = x ** 2
axes[1].plot(x, y2,'r-')
axes[1].set_xlim( 0, 100 )
axes[1].set_ylim( 0, 10000 )
###Output
_____no_output_____
###Markdown
** See if you can resize the plot by adding the figsize() argument in plt.subplots() are copying and pasting your previous code.**
###Code
fig, axes = plt.subplots(1, 2, figsize=(13,2))
x = np.arange(101)
# axes[0]
y1 = 2*x
axes[0].plot(x, y1,'b-')
axes[0].set_xlim( 0, 100 )
axes[0].set_ylim( 0, 200 )
# axes[1]
y2 = x ** 2
axes[1].plot(x, y2,'r--')
axes[1].set_xlim( 0, 100 )
axes[1].set_ylim( 0, 10000 )
###Output
_____no_output_____
|
Model Implement/ResNet152V2/ResNet152V2.ipynb
|
###Markdown
Import dataset
###Code
from google.colab import drive
import os
drive.mount('/content/GoogleDrive', force_remount=True)
path = '/content/GoogleDrive/My Drive/Vietnamese Foods'
os.chdir(path)
!ls
# Move dataset to /tmp cause reading files from Drive is very slow
!cp Dataset/vietnamese-foods-split.zip /tmp
!unzip -q /tmp/vietnamese-foods-split.zip -d /tmp
###Output
_____no_output_____
###Markdown
Check GPU working
###Code
import tensorflow as tf

physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0': raise SystemError('GPU device not found')
print('Found GPU at:', device_name)
###Output
Found GPU at: /device:GPU:0
###Markdown
Setup path
###Code
TRAIN_PATH = '/tmp/Images/Train'
VALIDATE_PATH = '/tmp/Images/Validate'
TEST_PATH = '/tmp/Images/Test'
PATH = 'Models/ResNet152V2'
BASE_MODEL_BEST = os.path.join(PATH, 'base_model_best.hdf5')
BASE_MODEL_TRAINED = os.path.join(PATH, 'base_model_trained.hdf5')
BASE_MODEL_FIG = os.path.join(PATH, 'base_model_fig.jpg')
FINE_TUNE_MODEL_BEST = os.path.join(PATH, 'fine_tune_model_best.hdf5')
FINE_TUNE_MODEL_TRAINED = os.path.join(PATH, 'fine_tune_model_trained.hdf5')
FINE_TUNE_MODE_FIG = os.path.join(PATH, 'fine_tune_model_fig.jpg')
###Output
_____no_output_____
###Markdown
Preparing data
###Code
IMAGE_SIZE = (300, 300)
BATCH_SIZE = 128
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(
rescale = 1./255,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True
)
validate_generator = ImageDataGenerator(rescale=1./255)
test_generator = ImageDataGenerator(rescale=1./255)
generated_train_data = train_generator.flow_from_directory(TRAIN_PATH, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE)
generated_validate_data = validate_generator.flow_from_directory(VALIDATE_PATH, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE)
generated_test_data = test_generator.flow_from_directory(TEST_PATH, target_size=IMAGE_SIZE)
###Output
Found 17581 images belonging to 30 classes.
Found 2515 images belonging to 30 classes.
Found 5040 images belonging to 30 classes.
###Markdown
Model implement
###Code
CLASSES = 30
INITIAL_EPOCHS = 15
FINE_TUNE_EPOCHS = 15
TOTAL_EPOCHS = INITIAL_EPOCHS + FINE_TUNE_EPOCHS
FINE_TUNE_AT = 516
###Output
_____no_output_____
###Markdown
Define the model
###Code
from tensorflow.keras.applications.resnet_v2 import ResNet152V2
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.models import Model
pretrained_model = ResNet152V2(weights='imagenet', include_top=False)
last_output = pretrained_model.output
x = GlobalAveragePooling2D()(last_output)
x = Dense(512, activation='relu')(x)
x = Dropout(0.2)(x)
outputs = Dense(CLASSES, activation='softmax')(x)
model = Model(inputs=pretrained_model.input, outputs=outputs)
###Output
_____no_output_____
###Markdown
Callbacks
###Code
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
base_checkpointer = ModelCheckpoint(
filepath = BASE_MODEL_BEST,
save_best_only = True,
verbose = 1
)
fine_tune_checkpointer = ModelCheckpoint(
filepath = FINE_TUNE_MODEL_BEST,
save_best_only = True,
verbose = 1,
)
# Stop if no improvement after 3 epochs
early_stopping = EarlyStopping(monitor='val_loss', patience=3, verbose=1)
###Output
_____no_output_____
###Markdown
Stage 1: Transfer learning
###Code
for layer in pretrained_model.layers: layer.trainable = False
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(
generated_train_data,
validation_data = generated_validate_data,
validation_steps = generated_validate_data.n // BATCH_SIZE,
steps_per_epoch = generated_train_data.n // BATCH_SIZE,
callbacks = [base_checkpointer, early_stopping],
epochs = INITIAL_EPOCHS,
verbose = 1,
)
model.save(BASE_MODEL_TRAINED)
import matplotlib.pyplot as plt

acc = history.history['accuracy']
loss = history.history['loss']
val_acc = history.history['val_accuracy']
val_loss = history.history['val_loss']
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()), 1])
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([min(plt.ylim()), max(plt.ylim())])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.savefig(BASE_MODEL_FIG)
plt.show()
###Output
_____no_output_____
###Markdown
Stage 2: Fine tuning
###Code
for layer in pretrained_model.layers[:FINE_TUNE_AT]: layer.trainable = False
for layer in pretrained_model.layers[FINE_TUNE_AT:]: layer.trainable = True
from tensorflow.keras.optimizers import SGD
model.compile(
optimizer = SGD(learning_rate=1e-4, momentum=0.9),
loss = 'categorical_crossentropy',
metrics = ['accuracy']
)
history_fine = model.fit(
generated_train_data,
validation_data = generated_validate_data,
validation_steps = generated_validate_data.n // BATCH_SIZE,
steps_per_epoch = generated_train_data.n // BATCH_SIZE,
epochs = TOTAL_EPOCHS,
initial_epoch = history.epoch[-1],
callbacks = [fine_tune_checkpointer, early_stopping],
verbose = 1,
)
model.save(FINE_TUNE_MODEL_TRAINED)
acc += history_fine.history['accuracy']
loss += history_fine.history['loss']
val_acc += history_fine.history['val_accuracy']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([min(plt.ylim()), 1])
plt.plot([INITIAL_EPOCHS - 6, INITIAL_EPOCHS - 6], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([min(plt.ylim()), max(plt.ylim())])
plt.plot([INITIAL_EPOCHS - 6, INITIAL_EPOCHS - 6], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.savefig(FINE_TUNE_MODE_FIG)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation
###Code
loss, accuracy = model.evaluate(generated_test_data)
print('Test accuracy:', accuracy)
import gc
del model
gc.collect()
###Output
_____no_output_____
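###Markdown
As an optional extra check (not part of the original run), the best checkpoint written by `ModelCheckpoint` during fine-tuning can be reloaded and scored on the test set, assuming the `FINE_TUNE_MODEL_BEST` file was written:
###Code
from tensorflow.keras.models import load_model

# Reload the best fine-tuned checkpoint and evaluate it on the test generator
best_model = load_model(FINE_TUNE_MODEL_BEST)
best_loss, best_accuracy = best_model.evaluate(generated_test_data)
print('Best checkpoint test accuracy:', best_accuracy)
###Output
_____no_output_____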
|
00-Python Object and Data Structure Basics/04.StringFormatting.ipynb
|
###Markdown
String formatting
###Code
'I can insert {} here and {} as a second word'.format('alpha','beta')
'Or I can change the order and I can insert {1} here and {0} as a second word'.format('alpha','beta')
#"We can also use variable and insert them directly like below"
"My name is {name} and my current role is {role} @ {company}".format(name='Murugan',role='TL',company='Infosys')
name="Murugan";
role='TL';
company='Infosys';
#This below method works only in 3.6 version of python
#print(f'My name is {name}')
name + ' ' + role
print("My name is %s and my current role is %s @ %r" %(name,role,company))
bigdecimalvalue=788347.8938934
floatvalue=100/333
print(floatvalue)
print("Flotin point with required decimal spaces like %1.4f" %(floatvalue))
print("Flotin point with required decimal spaces like %1.2f" %(bigdecimalvalue))
print("Flotin point with required decimal spaces like %21.6f" %(34.976)) #21. or 1. doesn't matters here, just adds space
print('{0:<8} | {1:^8} | {2:>8}'.format('Left','Center','Right'))
print('{0:<8} | {1:^8} | {2:>8}'.format(11,22,33))
print('{0:.<8} | {1:-^8} | {2:*>8}'.format(11,22,33))
###Output
Left | Center | Right
11 | 22 | 33
11...... | ---22--- | ******33
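###Markdown
The commented-out f-string above works on Python 3.6 and later; a small added example (reusing the `name`, `role` and `company` variables defined above) shows f-strings together with a format specifier:
###Code
# f-strings interpolate variables directly and accept the same format specs as format()
print(f"My name is {name} and my current role is {role} @ {company}")
print(f"Pi to three decimal places: {3.14159:.3f}")
###Output
_____no_output_____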
|
03 - SAS Software/03 - Exercises and Answers.ipynb
|
###Markdown
Workbook 3 - Exercises and AnswersAs previously, let's set up our environment first:
###Code
!pip install bumps
!pip install numpy
!pip install sasmodels
!pip install matplotlib
!git clone https://github.com/timsnow/advanced_sas_training_course
%cd 'advanced_sas_training_course/03 - SAS Software'
###Output
_____no_output_____
###Markdown
Now for the imports:
###Code
from numpy import loadtxt
from matplotlib import pyplot as plt
from DataFitter1D import DataFitter1D
###Output
_____no_output_____
###Markdown
And let's provide a few starting hints
###Code
data_location = 'data/CylinderData1D.dat'
parameter_location = 'parameters/fit_parameters.txt'
parameter_string = 'sasview_parameter_values:model_name,cylinder:scale,False,1.0,None,0.0,inf,():background,False,0.001,None,-inf,inf,():sld,False,4,None,-inf,inf,():sld_solvent,False,1,None,-inf,inf,():radius,True,10,None,1.0,50.0,():length,True,400,None,1.0,2000.0,():is_data,False:tab_index,1:is_batch_fitting,False:data_name,[]:data_id,[]:tab_name,M1:q_range_min,0.0005:q_range_max,0.5:q_weighting,0:weighting,0:smearing,0:smearing_min,None:smearing_max,None:polydisperse_params,False:magnetic_params,False:chainfit_params,False:2D_params,False:fitpage_category,Cylinder:fitpage_model,cylinder:fitpage_structure,None:'
data_fitter = DataFitter1D()
###Output
_____no_output_____
###Markdown
Load in the `x` and `y` data from the `CylinderData1D.dat` file and then load this into the `data_fitter` object:
###Code
overall_data = loadtxt(data_location)
x_data = overall_data[:,0]
y_data = overall_data[:,1]
data_fitter.loadData(xData = x_data, yData = y_data)
###Output
_____no_output_____
###Markdown
Now, using your choice of either the fit parameters file, or the fit parameters string, load the fitting parameters into the `data_fitter` object:
###Code
data_fitter.parameterParserFromTextFile(parameter_location)
# Or
data_fitter.parameterParserFromString('sasview_parameter_values:model_name,cylinder:scale,False,1.0,None,0.0,inf,():background,False,0.001,None,-inf,inf,():sld,False,4,None,-inf,inf,():sld_solvent,False,1,None,-inf,inf,():radius,True,10,None,1.0,50.0,():length,True,400,None,1.0,2000.0,():is_data,False:tab_index,1:is_batch_fitting,False:data_name,[]:data_id,[]:tab_name,M1:q_range_min,0.0005:q_range_max,0.5:q_weighting,0:weighting,0:smearing,0:smearing_min,None:smearing_max,None:polydisperse_params,False:magnetic_params,False:chainfit_params,False:2D_params,False:fitpage_category,Cylinder:fitpage_model,cylinder:fitpage_structure,None:')
###Output
_____no_output_____
###Markdown
Fit and plot the data:
###Code
data_fitter.fitData()
plt.plot(x_data, y_data, label = 'Data')
plt.plot(data_fitter.dataHolder.x, data_fitter.fittingProblem.fitness.theory(), label = 'Fit')
plt.yscale('log')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Either append key fitting parameters on to a plot or print out a fitting report:
###Code
model_type = data_fitter.fittingProblem.fitness.model.sasmodel.info.id.capitalize()
model_radius = '{:.2f}'.format(data_fitter.fittingProblem.fitness.model.radius.value)
model_length = '{:.2f}'.format(data_fitter.fittingProblem.fitness.model.length.value)
plt.figure('Fitted Data Plot')
plt.title(r'Fitted length = ' + (model_length) + ' Å | Fitted radius = ' + model_radius + ' Å\n' + model_type + ' model')
plt.plot(x_data, y_data, label = 'Data')
plt.plot(data_fitter.dataHolder.x, data_fitter.fittingProblem.fitness.theory(), label = 'Fit')
plt.yscale('log')
plt.legend()
plt.show()
# Or
print('Fitting report for ' + data_fitter.fittingProblem.fitness.model.sasmodel.info.id.capitalize())
for key, value in data_fitter.fittingProblem.fitness.model.parameters().items():
print(key + ': ' + str(value.value))
###Output
_____no_output_____
###Markdown
Now, encapsulate the code above into a single function definition that's able to take both input data and fitting parameters *e.g.* def my_great_fit_function(data_file, fitting_parameters):
###Code
def fit_and_report_function(data_file_path, parameter_file_path):
data_fitter = DataFitter1D()
overall_data = loadtxt(data_file_path)
x_data = overall_data[:,0]
y_data = overall_data[:,1]
data_fitter.loadData(xData = x_data, yData = y_data)
data_fitter.parameterParserFromTextFile(parameter_file_path)
data_fitter.fitData()
model_type = data_fitter.fittingProblem.fitness.model.sasmodel.info.id.capitalize()
model_radius = '{:.2f}'.format(data_fitter.fittingProblem.fitness.model.radius.value)
model_length = '{:.2f}'.format(data_fitter.fittingProblem.fitness.model.length.value)
plt.figure('Fitted Data Plot')
plt.title(r'Fitted length = ' + (model_length) + ' Å | Fitted radius = ' + model_radius + ' Å\n' + model_type + ' model')
plt.plot(x_data, y_data, label = 'Data')
plt.plot(data_fitter.dataHolder.x, data_fitter.fittingProblem.fitness.theory(), label = 'Fit')
plt.yscale('log')
plt.legend()
plt.show()
print('Fitting report for ' + data_fitter.fittingProblem.fitness.model.sasmodel.info.id.capitalize())
for key, value in data_fitter.fittingProblem.fitness.model.parameters().items():
print(key + ': ' + str(value.value))
###Output
_____no_output_____
###Markdown
Create a list of inputs (use a `for` loop to append the same file name and fitting parameters *n* times) and then iterate over this list to perform fits on this dataset *n* times**Hint:** lists can be lists of lists: [[..., ...], [..., ...], ...]
###Code
small_list = [data_location, parameter_location]
bigger_list = []
for iteration in range(10):
bigger_list.append(small_list)
for sub_entry in bigger_list:
fit_and_report_function(sub_entry[0], sub_entry[1])
###Output
_____no_output_____
|
notebooks/Basic example 3--resampling a DES cluster lensing chain.ipynb
|
###Markdown
Resample a DES cluster lensing chain with two parametersIn this example, we will read in a DES Year 1 cluster weak lensing chain with two parameters ($\log_{10}M$,$c$) and build an importance sampler for it. We will then resample it and try to recover (essentially) the exact same chain.
###Code
#Import things
import numpy as np
import matplotlib.pyplot as plt
import importance_sampler as isamp
import chainconsumer as CC
import emcee #for doing MCMC
%matplotlib inline
#Plot formatting
plt.rc("font", size=18, family="serif")
plt.rc("text", usetex=True)
#Read in the chain and remove burn-in (which I only know is there for this example)
input_chain = np.loadtxt("DES_RMWL_Mc_chainz0l3.txt")[32*1000:]
lnpost = np.loadtxt("DES_RMWL_Mc_likesz0l3.txt")[32*1000:]
print("chain shape is ", input_chain.shape)
print("lnpost shape is ", lnpost.shape)
#Pick out training points
N_training = 200
IS = isamp.ImportanceSampler(input_chain, lnpost, scale = 8)
IS.select_training_points(N_training, method="LH")
#Visualize the training points selected against the chain
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(3,3))
plt.subplots_adjust(wspace=0.6)
ax.scatter(input_chain[:,0], input_chain[:,1], c='b', s=0.5, alpha=0.2)
points,_ = IS.get_training_data()
ax.scatter(points[:,0], points[:,1], c='k', s=8)
#Train the GP inside of the sampler
IS.train()
#Set up an MCMC object and run
means = np.mean(input_chain,0)
nwalkers = 32
ndim = len(input_chain[0])
sampler = emcee.EnsembleSampler(nwalkers, ndim, IS.predict)
print("Running first burn-in")
p0 = np.array([means + means*1e-3*np.random.randn(ndim) for i in range(nwalkers)])
p0, lp, _ = sampler.run_mcmc(p0, 1000)
print("Running second burn-in")
p0 = p0[np.argmax(lp)] + p0[np.argmax(lp)]*1e-4*np.random.randn(nwalkers, ndim)
p0, lp, _ = sampler.run_mcmc(p0, 1000)
sampler.reset()
print("Running production...")
sampler.run_mcmc(p0, 5000);
test_chain = sampler.flatchain
print("Means and stds of input chain: ", np.mean(input_chain, 0), np.std(input_chain, 0))
print("Means and stds of test chain: ", np.mean(test_chain, 0), np.std(test_chain, 0))
c = CC.ChainConsumer()
c.add_chain(input_chain, parameters=[r"$\log_{10}M_{\rm 200b}$", r"$c_{\rm 200b}$"], name="Input chain")
c.add_chain(test_chain, parameters=[r"$\log_{10}M_{\rm 200b}$", r"$c_{\rm 200b}$"], name="Resampled chain")
fig = c.plotter.plot()
#fig.savefig("cluster_lensing_example.png", dpi=300, bbox_inches="tight")
###Output
_____no_output_____
|
V7/v7_exercises_material/1_PySpark.ipynb
|
###Markdown
CopyrightThis exercise comes from [pnavaro](https://github.com/pnavaro/big-data) and has been modified to work in our environment. PySpark - [Apache Spark](https://spark.apache.org) was first released in 2014. - It was originally developed by [Matei Zaharia](http://people.csail.mit.edu/matei) as a class project, and later a PhD dissertation, at University of California, Berkeley.- Spark is written in [Scala](https://www.scala-lang.org).- All images come from [Databricks](https://databricks.com/product/getting-started-guide). - Apache Spark is a fast and general-purpose cluster computing system. - It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.- Spark can manage "big data" collections with a small set of high-level primitives like `map`, `filter`, `groupby`, and `join`. With these common patterns we can often handle computations that are more complex than map, but are still structured.- It also supports a rich set of higher-level tools including [Spark SQL](https://spark.apache.org/docs/latest/sql-programming-guide.html) for SQL and structured data processing, [MLlib](https://spark.apache.org/docs/latest/ml-guide.html) for machine learning, [GraphX](https://spark.apache.org/docs/latest/graphx-programming-guide.html) for graph processing, and Spark Streaming. Resilient distributed datasets- The fundamental abstraction of Apache Spark is a read-only, parallel, distributed, fault-tolerent collection called a resilient distributed datasets (RDD).- RDDs behave a bit like Python collections (e.g. lists).- When working with Apache Spark we iteratively apply functions to every item of these collections in parallel to produce *new* RDDs.- The data is distributed across nodes in a cluster of computers.- Functions implemented in Spark can work in parallel across elements of the collection.- The Spark framework allocates data and processing to different nodes, without any intervention from the programmer.- RDDs automatically rebuilt on machine failure. Lifecycle of a Spark Program1. Create some input RDDs from external data or parallelize a collection in your driver program.2. Lazily transform them to define new RDDs using transformations like `filter()` or `map()`3. Ask Spark to cache() any intermediate RDDs that will need to be reused.4. Launch actions such as count() and collect() to kick off a parallel computation, which is then optimized and executed by Spark. Operations on Distributed Data- Two types of operations: **transformations** and **actions**- Transformations are *lazy* (not computed immediately) - Transformations are executed when an action is run [Transformations](https://spark.apache.org/docs/latest/rdd-programming-guide.htmltransformations) (lazy)```map() flatMap()filter() mapPartitions() mapPartitionsWithIndex() sample()union() intersection() distinct()groupBy() groupByKey()reduceBy() reduceByKey()sortBy() sortByKey()join()cogroup()cartesian()pipe()coalesce()repartition()partitionBy()...``` [Actions](https://spark.apache.org/docs/latest/rdd-programming-guide.htmlactions)```reduce()collect()count()first()take()takeSample()saveToCassandra()takeOrdered()saveAsTextFile()saveAsSequenceFile()saveAsObjectFile()countByKey()foreach()``` Python APIPySpark uses Py4J that enables Python programs to dynamically access Java objects. 
The `SparkContext` class- When working with Apache Spark we invoke methods on an object which is an instance of the `pyspark.SparkContext` context.- Typically, an instance of this object will be created automatically for you and assigned to the variable `sc`.- The `parallelize` method in `SparkContext` can be used to turn any ordinary Python collection into an RDD; - normally we would create an RDD from a large file or an HBase table. First examplePySpark isn't on sys.path by default, but that doesn't mean it can't be used as a regular library. You can address this by either symlinking pyspark into your site-packages, or adding pyspark to sys.path at runtime. [findspark](https://github.com/minrk/findspark) does the latter.We have a spark context sc to use with a tiny local spark cluster with 4 nodes (will work just fine on a multicore machine).
###Code
import findspark
import pyspark
findspark.init()
sc = pyspark.SparkContext(appName="Exercise Spark RDD")
print(sc) # it is like a Pool Processor executor
###Output
_____no_output_____
###Markdown
Create your first RDD
###Code
data = list(range(8))
rdd = sc.parallelize(data) # create collection
rdd
import os
path = os.getcwd()
print(path)
###Output
_____no_output_____
###Markdown
ExerciseCreate a file `sample.txt` with lorem text using the `faker` package. Read and load it into an RDD with the `textFile` Spark function. Note from Frederick Egli: if you get "no module named 'faker'", you need to install it; try searching for the library here: https://pypi.org/
###Code
from faker import Faker
fake = Faker()
Faker.seed(0)
with open("sample.txt","w") as f:
f.write(fake.text(max_nb_chars=1000))
rdd = sc.textFile(f"file:///{path}/sample.txt")
###Output
_____no_output_____
###Markdown
CollectAction / To Driver: Return all items in the RDD to the driver in a single listSource: https://i.imgur.com/DUO6ygB.png Exercise Collect the text you read before from the `sample.txt` file.
###Code
# your spark code ...
###Output
_____no_output_____
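###Markdown
One possible answer (an addition), assuming `rdd` still holds the lines read from `sample.txt`:
###Code
# collect() is an action: it returns every element of the RDD to the driver as a Python list
lines = rdd.collect()
print(lines)
###Output
_____no_output_____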
###Markdown
MapTransformation / Narrow: Return a new RDD by applying a function to each element of this RDDSource: http://i.imgur.com/PxNJf0U.png
###Code
rdd = sc.parallelize(list(range(8)))
rdd.map(lambda x: x ** 2).collect() # Square each element
###Output
_____no_output_____
###Markdown
ExerciseReplace the lambda function by a function that contains a pause (sleep(1)) and check if the `map` operation is parallelized.
###Code
import time
# time.sleep(1) sleeeps one second
# your spark code ...
###Output
_____no_output_____
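###Markdown
One possible answer (an addition): each call sleeps for one second, so with the four local workers the eight elements should finish in well under the eight seconds a serial loop would need, which shows that `map` runs in parallel.
###Code
import time

def slow_square(x):
    time.sleep(1)          # simulate one second of work per element
    return x ** 2

rdd = sc.parallelize(range(8), 4)   # 4 partitions, so up to 4 tasks run at once
start = time.time()
print(rdd.map(slow_square).collect())
print(f"elapsed: {time.time() - start:.1f}s")
###Output
_____no_output_____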
###Markdown
FilterTransformation / Narrow: Return a new RDD containing only the elements that satisfy a predicateSource: http://i.imgur.com/GFyji4U.png
###Code
# Select only the even elements
rdd.filter(lambda x: x % 2 == 0).collect()
###Output
_____no_output_____
###Markdown
FlatMapTransformation / Narrow: Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results
###Code
rdd = sc.parallelize([1,2,3])
rdd.flatMap(lambda x: (x, x*100, 42)).collect()
###Output
_____no_output_____
###Markdown
ExerciseUse FlatMap to clean the text from the `sample.txt` file: lowercase it, remove dots, and split it into words. GroupByTransformation / Wide: Group the data in the original RDD. Create pairs where the key is the output of a user function, and the value is all items for which the function yields this key.
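###Markdown
Before the GroupBy demo below, here is a hedged sketch for the FlatMap exercise just above (assuming `path` and `sample.txt` from earlier): lowercase each line, strip the dots, and split on whitespace.
###Code
words = (sc.textFile(f"file:///{path}/sample.txt")
           .flatMap(lambda line: line.lower().replace(".", "").split()))
words.take(10)  # a small sample of the cleaned words
###Output
_____no_output_____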
###Code
rdd = sc.parallelize(['John', 'Fred', 'Anna', 'James'])
rdd = rdd.groupBy(lambda w: w[0])
[(k, list(v)) for (k, v) in rdd.collect()]
###Output
_____no_output_____
###Markdown
GroupByKeyTransformation / Wide: Group the values for each key in the original RDD. Create a new pair where the original key corresponds to this collected group of values.
###Code
rdd = sc.parallelize([('B',5),('B',4),('A',3),('A',2),('A',1)])
rdd = rdd.groupByKey()
[(j[0], list(j[1])) for j in rdd.collect()]
###Output
_____no_output_____
###Markdown
JoinTransformation / Wide: Return a new RDD containing all pairs of elements having the same key in the original RDDs
###Code
x = sc.parallelize([("a", 1), ("b", 2)])
y = sc.parallelize([("a", 3), ("a", 4), ("b", 5)])
x.join(y).collect()
###Output
_____no_output_____
###Markdown
DistinctTransformation / Wide: Return a new RDD containing distinct items from the original RDD (omitting all duplicates)
###Code
rdd = sc.parallelize([1,2,3,3,4])
rdd.distinct().collect()
###Output
_____no_output_____
###Markdown
KeyByTransformation / Narrow: Create a Pair RDD, forming one pair for each item in the original RDD. The pair’s key is calculated from the value via a user-supplied function.
###Code
rdd = sc.parallelize(['John', 'Fred', 'Anna', 'James'])
rdd.keyBy(lambda w: w[0]).collect()
###Output
_____no_output_____
###Markdown
Actions Map-Reduce operation Action / To Driver: Aggregate all the elements of the RDD by applying a user function pairwise to elements and partial results, and return a result to the driver
###Code
from operator import add
rdd = sc.parallelize(list(range(8)))
rdd.map(lambda x: x ** 2).reduce(add) # reduce is an action!
###Output
_____no_output_____
###Markdown
Max, Min, Sum, Mean, Variance, StdevAction / To Driver: Compute the respective function (maximum value, minimum value, sum, mean, variance, or standard deviation) from a numeric RDD. CountByKeyAction / To Driver: Return a map of keys and counts of their occurrences in the RDD
###Code
rdd = sc.parallelize([('J', 'James'), ('F','Fred'),
('A','Anna'), ('J','John')])
rdd.countByKey()
###Output
_____no_output_____
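###Markdown
A quick sketch (not in the original notebook) of the numeric actions listed above, applied to a small RDD.
###Code
nums = sc.parallelize([1, 2, 3, 4, 5])
print(nums.max(), nums.min(), nums.sum(), nums.mean(), nums.variance(), nums.stdev())
###Output
_____no_output_____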
###Markdown
Stop the Local Spark Cluster
###Code
sc.stop()
###Output
_____no_output_____
|
how-to-guides/external-references.ipynb
|
###Markdown
 External References In addition to opening existing Dataflows in code and modifying them, it is also possible to create and persist Dataflows that reference another Dataflow that has been persisted to a .dprep file. In this case, executing this Dataflow will load and execute the referenced Dataflow dynamically, and then execute the steps in the referencing Dataflow. To demonstrate, we will create a Dataflow that loads and transforms some data. After that, we will persist this Dataflow to disk. To learn more about saving and opening .dprep files, see: [Opening and Saving Dataflows](./open-save-dataflows.ipynb)
###Code
import azureml.dataprep as dprep
import tempfile
import os
dflow = dprep.auto_read_file('../data/crime.txt')
dflow = dflow.drop_errors(['Column7', 'Column8', 'Column9'], dprep.ColumnRelationship.ANY)
dflow_path = os.path.join(tempfile.gettempdir(), 'package.dprep')
dflow.save(dflow_path)
###Output
_____no_output_____
###Markdown
Now that we have a .dprep file, we can create a new Dataflow that references it.
###Code
dflow_new = dprep.Dataflow.reference(dprep.ExternalReference(dflow_path))
dflow_new.head(5)
###Output
_____no_output_____
###Markdown
When executed, the new Dataflow returns the same results as the one we saved to the .dprep file. Since this reference is resolved on execution, updating the referenced Dataflow results in the changes being visible when re-executing the referencing Dataflow.
###Code
dflow = dflow.take(5)
dflow.save(dflow_path)
dflow_new.head(10)
###Output
_____no_output_____
###Markdown
External ReferencesCopyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. In addition to opening existing Dataflows in code and modifying them, it is also possible to create and persist Dataflows that reference another Dataflow that has been persisted to a .dprep file. In this case, executing this Dataflow will load and execute the referenced Dataflow dynamically, and then execute the steps in the referencing Dataflow. To demonstrate, we will create a Dataflow that loads and transforms some data. After that, we will persist this Dataflow to disk. To learn more about saving and opening .dprep files, see: [Opening and Saving Dataflows](./open-save-dataflows.ipynb)
###Code
import azureml.dataprep as dprep
import tempfile
import os
dflow = dprep.auto_read_file('../data/crime.txt')
dflow = dflow.drop_errors(['Column7', 'Column8', 'Column9'], dprep.ColumnRelationship.ANY)
dflow_path = os.path.join(tempfile.gettempdir(), 'package.dprep')
dflow.save(dflow_path)
###Output
_____no_output_____
###Markdown
Now that we have a .dprep file, we can create a new Dataflow that references it.
###Code
dflow_new = dprep.Dataflow.reference(dprep.ExternalReference(dflow_path))
dflow_new.head(5)
###Output
_____no_output_____
###Markdown
When executed, the new Dataflow returns the same results as the one we saved to the .dprep file. Since this reference is resolved on execution, updating the referenced Dataflow results in the changes being visible when re-executing the referencing Dataflow.
###Code
dflow = dflow.take(5)
dflow.save(dflow_path)
dflow_new.head(10)
###Output
_____no_output_____
|
module06_mnist_batchnorm.ipynb
|
###Markdown
 
###Code
import torch
import random
import numpy as np
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True
import torchvision.datasets
MNIST_train = torchvision.datasets.MNIST('./', download=True, train=True)
MNIST_test = torchvision.datasets.MNIST('./', download=True, train=False)
X_train = MNIST_train.train_data
y_train = MNIST_train.train_labels
X_test = MNIST_test.test_data
y_test = MNIST_test.test_labels
len(y_train), len(y_test)
import matplotlib.pyplot as plt
plt.imshow(X_train[0, :, :])
plt.show()
print(y_train[0])
X_train = X_train.unsqueeze(1).float()
X_test = X_test.unsqueeze(1).float()
X_train.shape
class LeNet5(torch.nn.Module):
def __init__(self,
activation='tanh',
pooling='avg',
conv_size=5,
use_batch_norm=False):
super(LeNet5, self).__init__()
self.conv_size = conv_size
self.use_batch_norm = use_batch_norm
if activation == 'tanh':
activation_function = torch.nn.Tanh()
elif activation == 'relu':
activation_function = torch.nn.ReLU()
else:
raise NotImplementedError
if pooling == 'avg':
pooling_layer = torch.nn.AvgPool2d(kernel_size=2, stride=2)
elif pooling == 'max':
pooling_layer = torch.nn.MaxPool2d(kernel_size=2, stride=2)
else:
raise NotImplementedError
if conv_size == 5:
self.conv1 = torch.nn.Conv2d(
in_channels=1, out_channels=6, kernel_size=5, padding=2)
elif conv_size == 3:
self.conv1_1 = torch.nn.Conv2d(
in_channels=1, out_channels=6, kernel_size=3, padding=1)
self.conv1_2 = torch.nn.Conv2d(
in_channels=6, out_channels=6, kernel_size=3, padding=1)
else:
raise NotImplementedError
self.act1 = activation_function
self.bn1 = torch.nn.BatchNorm2d(num_features=6)
self.pool1 = pooling_layer
if conv_size == 5:
            self.conv2 = torch.nn.Conv2d(
in_channels=6, out_channels=16, kernel_size=5, padding=0)
elif conv_size == 3:
self.conv2_1 = torch.nn.Conv2d(
in_channels=6, out_channels=16, kernel_size=3, padding=0)
self.conv2_2 = torch.nn.Conv2d(
in_channels=16, out_channels=16, kernel_size=3, padding=0)
else:
raise NotImplementedError
self.act2 = activation_function
self.bn2 = torch.nn.BatchNorm2d(num_features=16)
self.pool2 = pooling_layer
self.fc1 = torch.nn.Linear(5 * 5 * 16, 120)
self.act3 = activation_function
self.fc2 = torch.nn.Linear(120, 84)
self.act4 = activation_function
self.fc3 = torch.nn.Linear(84, 10)
def forward(self, x):
if self.conv_size == 5:
x = self.conv1(x)
elif self.conv_size == 3:
x = self.conv1_2(self.conv1_1(x))
x = self.act1(x)
if self.use_batch_norm:
x = self.bn1(x)
x = self.pool1(x)
if self.conv_size == 5:
x = self.conv2(x)
elif self.conv_size == 3:
x = self.conv2_2(self.conv2_1(x))
x = self.act2(x)
if self.use_batch_norm:
x = self.bn2(x)
x = self.pool2(x)
x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
x = self.fc1(x)
x = self.act3(x)
x = self.fc2(x)
x = self.act4(x)
x = self.fc3(x)
return x
def train(net, X_train, y_train, X_test, y_test):
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = net.to(device)
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1.0e-3)
batch_size = 100
test_accuracy_history = []
test_loss_history = []
X_test = X_test.to(device)
y_test = y_test.to(device)
for epoch in range(30):
order = np.random.permutation(len(X_train))
for start_index in range(0, len(X_train), batch_size):
optimizer.zero_grad()
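            # train()/eval() matter here mainly because of BatchNorm: in train mode the
            # layer normalises with per-batch statistics and updates its running estimates,
            # while eval mode (used below before scoring the test set) relies on the stored
            # running mean/variance.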
net.train()
batch_indexes = order[start_index:start_index+batch_size]
X_batch = X_train[batch_indexes].to(device)
y_batch = y_train[batch_indexes].to(device)
preds = net.forward(X_batch)
loss_value = loss(preds, y_batch)
loss_value.backward()
optimizer.step()
net.eval()
test_preds = net.forward(X_test)
test_loss_history.append(loss(test_preds, y_test).data.cpu())
accuracy = (test_preds.argmax(dim=1) == y_test).float().mean().data.cpu()
test_accuracy_history.append(accuracy)
print(accuracy)
print('---------------')
return test_accuracy_history, test_loss_history
accuracies = {}
losses = {}
accuracies['tanh'], losses['tanh'] = \
train(LeNet5(activation='tanh', conv_size=5),
X_train, y_train, X_test, y_test)
accuracies['relu'], losses['relu'] = \
train(LeNet5(activation='relu', conv_size=5),
X_train, y_train, X_test, y_test)
accuracies['relu_3'], losses['relu_3'] = \
train(LeNet5(activation='relu', conv_size=3),
X_train, y_train, X_test, y_test)
accuracies['relu_3_max_pool'], losses['relu_3_max_pool'] = \
train(LeNet5(activation='relu', conv_size=3, pooling='max'),
X_train, y_train, X_test, y_test)
accuracies['relu_3_max_pool_bn'], losses['relu_3_max_pool_bn'] = \
train(LeNet5(activation='relu', conv_size=3, pooling='max', use_batch_norm=True),
X_train, y_train, X_test, y_test)
for experiment_id in accuracies.keys():
plt.plot(accuracies[experiment_id], label=experiment_id)
plt.legend()
plt.title('Validation Accuracy');
for experiment_id in losses.keys():
plt.plot(losses[experiment_id], label=experiment_id)
plt.legend()
plt.title('Validation Loss');
import torch
import numpy as np
seed = int(input())
np.random.seed(seed)
torch.manual_seed(seed)
NUMBER_OF_EXPERIMENTS = 200
class SimpleNet(torch.nn.Module):
def __init__(self, activation):
super().__init__()
self.activation = activation
self.fc1 = torch.nn.Linear(1, 1, bias=False) # one neuron without bias
self.fc1.weight.data.fill_(1.) # init weight with 1
self.fc2 = torch.nn.Linear(1, 1, bias=False)
self.fc2.weight.data.fill_(1.)
self.fc3 = torch.nn.Linear(1, 1, bias=False)
self.fc3.weight.data.fill_(1.)
def forward(self, x):
x = self.activation(self.fc1(x))
x = self.activation(self.fc2(x))
x = self.activation(self.fc3(x))
return x
def get_fc1_grad_abs_value(self):
return torch.abs(self.fc1.weight.grad)
def get_fc1_grad_abs_value(net, x):
output = net.forward(x)
output.backward() # no loss function. Pretending that we want to minimize output
# In our case output is scalar, so we can calculate backward
fc1_grad = net.get_fc1_grad_abs_value().item()
net.zero_grad()
return fc1_grad
# activation = torch.nn.Hardshrink()
# net = SimpleNet(activation=activation)
# fc1_grads = []
# for x in torch.randn((NUMBER_OF_EXPERIMENTS, 1)):
# fc1_grads.append(get_fc1_grad_abs_value(net, x))
activation = {'ELU': torch.nn.ELU(), 'Hardtanh': torch.nn.Hardtanh(),
'LeakyReLU': torch.nn.LeakyReLU(), 'LogSigmoid': torch.nn.LogSigmoid(),
'PReLU': torch.nn.PReLU(), 'ReLU': torch.nn.ReLU(), 'ReLU6': torch.nn.ReLU6(),
'RReLU': torch.nn.RReLU(), 'SELU': torch.nn.SELU(), 'CELU': torch.nn.CELU(),
'Sigmoid': torch.nn.Sigmoid(), 'Softplus': torch.nn.Softplus(),
'Softshrink': torch.nn.Softshrink(), 'Softsign': torch.nn.Softsign(),
'Tanh': torch.nn.Tanh(), 'Tanhshrink': torch.nn.Tanhshrink(),
'Hardshrink': torch.nn.Hardshrink()}
for key, val in activation.items():
net = SimpleNet(activation=val)
fc1_grads = []
for x in torch.randn((NUMBER_OF_EXPERIMENTS, 1)):
fc1_grads.append(get_fc1_grad_abs_value(net, x))
    # The check is performed automatically by calling the function:
print(key, ' ', np.mean(fc1_grads))
    # (uncomment if you are solving the task locally)
import math
def ReLU(x):
return max(0, x)
def dReLU(x):
if x > 0:
return 1
return 0
# Tanh activation
t1 =round((1 - math.tanh(math.tanh(math.tanh(math.tanh(100)))) ** 2) * (1 - math.tanh(math.tanh(math.tanh(100))) ** 2)
* (1 - math.tanh(math.tanh(100)) ** 2) * (1 - math.tanh(100) ** 2) * 100, 3)
t2 = round((1 - math.tanh(math.tanh(math.tanh(math.tanh(100)))) ** 2) * (1 - math.tanh(math.tanh(math.tanh(100))) ** 2)
* (1 - math.tanh(math.tanh(100)) ** 2) * math.tanh(100), 3)
t3 = round((1 - math.tanh(math.tanh(math.tanh(math.tanh(100)))) ** 2) * (1 - math.tanh(math.tanh(math.tanh(100))) ** 2)
* math.tanh(math.tanh(100)), 3)
t4 = round((1 - math.tanh(math.tanh(math.tanh(math.tanh(100)))) ** 2) * math.tanh(math.tanh(math.tanh(100))), 3)
# ReLU activation
r1 = round(dReLU(ReLU(ReLU(ReLU(100)))) * dReLU(ReLU(ReLU(100))) * dReLU(ReLU(100)) * dReLU(100) * 100, 3)
r2 = round(dReLU(ReLU(ReLU(ReLU(100)))) * dReLU(ReLU(ReLU(100))) * dReLU(ReLU(100)) * ReLU(100), 3)
r3 = round(dReLU(ReLU(ReLU(ReLU(100)))) * dReLU(ReLU(ReLU(100))) * ReLU(ReLU(100)), 3)
r4 = round(dReLU(ReLU(ReLU(ReLU(100)))) * ReLU(ReLU(ReLU(100))), 3)
answer1, answer2 = [t1, t2, t3, t4], [r1, r2, r3, r4]
print(answer1, answer2, sep=',')
###Output
[0.0, 0.168, 0.304, 0.436],[100, 100, 100, 100]
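###Markdown
A cross-check sketch (an addition, not part of the original notebook): the same per-weight gradients can be reproduced with autograd by building the four-layer chain explicitly, with every weight initialised to 1 and input x = 100. Swapping `torch.tanh` for `torch.relu` gives the second list.
###Code
import torch

x = torch.tensor(100.0)
w = [torch.tensor(1.0, requires_grad=True) for _ in range(4)]

h = x
for wi in w:
    h = torch.tanh(wi * h)  # use torch.relu here to reproduce the ReLU column
h.backward()

print([round(wi.grad.item(), 3) for wi in w])  # approx [0.0, 0.168, 0.304, 0.436]
###Output
_____no_output_____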
###Markdown
 
###Code
import torch
import random
import numpy as np
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True
import torchvision.datasets
MNIST_train = torchvision.datasets.MNIST('./', download=True, train=True)
MNIST_test = torchvision.datasets.MNIST('./', download=True, train=False)
X_train = MNIST_train.train_data
y_train = MNIST_train.train_labels
X_test = MNIST_test.test_data
y_test = MNIST_test.test_labels
len(y_train), len(y_test)
import matplotlib.pyplot as plt
plt.imshow(X_train[0, :, :])
plt.show()
print(y_train[0])
X_train = X_train.unsqueeze(1).float()
X_test = X_test.unsqueeze(1).float()
X_train.shape
class LeNet5(torch.nn.Module):
def __init__(self,
activation='tanh',
pooling='avg',
conv_size=5,
use_batch_norm=False):
super(LeNet5, self).__init__()
self.conv_size = conv_size
self.use_batch_norm = use_batch_norm
if activation == 'tanh':
activation_function = torch.nn.Tanh()
elif activation == 'relu':
activation_function = torch.nn.ReLU()
else:
raise NotImplementedError
if pooling == 'avg':
pooling_layer = torch.nn.AvgPool2d(kernel_size=2, stride=2)
elif pooling == 'max':
pooling_layer = torch.nn.MaxPool2d(kernel_size=2, stride=2)
else:
raise NotImplementedError
if conv_size == 5:
self.conv1 = torch.nn.Conv2d(
in_channels=1, out_channels=6, kernel_size=5, padding=2)
elif conv_size == 3:
self.conv1_1 = torch.nn.Conv2d(
in_channels=1, out_channels=6, kernel_size=3, padding=1)
self.conv1_2 = torch.nn.Conv2d(
in_channels=6, out_channels=6, kernel_size=3, padding=1)
else:
raise NotImplementedError
self.act1 = activation_function
self.bn1 = torch.nn.BatchNorm2d(num_features=6)
self.pool1 = pooling_layer
if conv_size == 5:
            self.conv2 = torch.nn.Conv2d(
in_channels=6, out_channels=16, kernel_size=5, padding=0)
elif conv_size == 3:
self.conv2_1 = torch.nn.Conv2d(
in_channels=6, out_channels=16, kernel_size=3, padding=0)
self.conv2_2 = torch.nn.Conv2d(
in_channels=16, out_channels=16, kernel_size=3, padding=0)
else:
raise NotImplementedError
self.act2 = activation_function
self.bn2 = torch.nn.BatchNorm2d(num_features=16)
self.pool2 = pooling_layer
self.fc1 = torch.nn.Linear(5 * 5 * 16, 120)
self.act3 = activation_function
self.fc2 = torch.nn.Linear(120, 84)
self.act4 = activation_function
self.fc3 = torch.nn.Linear(84, 10)
def forward(self, x):
if self.conv_size == 5:
x = self.conv1(x)
elif self.conv_size == 3:
x = self.conv1_2(self.conv1_1(x))
x = self.act1(x)
if self.use_batch_norm:
x = self.bn1(x)
x = self.pool1(x)
if self.conv_size == 5:
x = self.conv2(x)
elif self.conv_size == 3:
x = self.conv2_2(self.conv2_1(x))
x = self.act2(x)
if self.use_batch_norm:
x = self.bn2(x)
x = self.pool2(x)
x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
x = self.fc1(x)
x = self.act3(x)
x = self.fc2(x)
x = self.act4(x)
x = self.fc3(x)
return x
def train(net, X_train, y_train, X_test, y_test):
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = net.to(device)
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1.0e-3)
batch_size = 100
test_accuracy_history = []
test_loss_history = []
X_test = X_test.to(device)
y_test = y_test.to(device)
for epoch in range(30):
order = np.random.permutation(len(X_train))
for start_index in range(0, len(X_train), batch_size):
optimizer.zero_grad()
net.train()
batch_indexes = order[start_index:start_index+batch_size]
X_batch = X_train[batch_indexes].to(device)
y_batch = y_train[batch_indexes].to(device)
preds = net.forward(X_batch)
loss_value = loss(preds, y_batch)
loss_value.backward()
optimizer.step()
net.eval()
test_preds = net.forward(X_test)
test_loss_history.append(loss(test_preds, y_test).data.cpu())
accuracy = (test_preds.argmax(dim=1) == y_test).float().mean().data.cpu()
test_accuracy_history.append(accuracy)
print(accuracy)
print('---------------')
return test_accuracy_history, test_loss_history
accuracies = {}
losses = {}
accuracies['tanh'], losses['tanh'] = \
train(LeNet5(activation='tanh', conv_size=5),
X_train, y_train, X_test, y_test)
accuracies['relu'], losses['relu'] = \
train(LeNet5(activation='relu', conv_size=5),
X_train, y_train, X_test, y_test)
accuracies['relu_3'], losses['relu_3'] = \
train(LeNet5(activation='relu', conv_size=3),
X_train, y_train, X_test, y_test)
accuracies['relu_3_max_pool'], losses['relu_3_max_pool'] = \
train(LeNet5(activation='relu', conv_size=3, pooling='max'),
X_train, y_train, X_test, y_test)
accuracies['relu_3_max_pool_bn'], losses['relu_3_max_pool_bn'] = \
train(LeNet5(activation='relu', conv_size=3, pooling='max', use_batch_norm=True),
X_train, y_train, X_test, y_test)
for experiment_id in accuracies.keys():
plt.plot(accuracies[experiment_id], label=experiment_id)
plt.legend()
plt.title('Validation Accuracy');
for experiment_id in losses.keys():
plt.plot(losses[experiment_id], label=experiment_id)
plt.legend()
plt.title('Validation Loss');
###Output
_____no_output_____
|
Models/Semantic Models/Ranker.ipynb
|
###Markdown
Importing packages
###Code
# Install required packages for Albert model
!pip install -q sentencepiece
!pip install -q transformers
!pip install -q tokenizers
!pip install -qU hazm
!pip install -qU clean-text[gpl]
#!pip install git+https://github.com/LIAAD/yake
!pip install rank_bm25
!pip install -qU sentence-transformers
!pip install -qU wikipedia-api
!pip install rank_bm25
!mkdir resources
!wget -q "https://github.com/sobhe/hazm/releases/download/v0.5/resources-0.5.zip" -P resources
!unzip -qq resources/resources-0.5.zip -d resources
!pip install faiss-cpu
!rm -rf /content/4ccae468eb73bf6c4f4de3075ddb5336
!rm -rf /content/preproc
!rm preprocessing.py utils.py
!mkdir -p /content/preproc
!git clone https://gist.github.com/4ccae468eb73bf6c4f4de3075ddb5336.git /content/preproc/
!mv /content/preproc/* /content/
!rm -rf /content/preproc
import numpy as np
import pandas as pd
import re
from tqdm import tqdm
import os
# import yake
from hazm import stopwords_list
from __future__ import unicode_literals
from hazm import *
import pickle
import requests
from termcolor import colored
from preprocessing import cleaning
import time
import plotly.express as px
import plotly.graph_objects as go
from itertools import chain
# for the models
import tensorflow as tf
import matplotlib.pyplot as plt
import re
import string
# BERT base
from transformers import BertTokenizer, BertModel
import torch
import torch.nn as nn
import torch.nn.functional as F
from __future__ import unicode_literals
Base_BERT_Path = 'HooshvareLab/bert-fa-base-uncased'
import faiss
# evaluator
from transformers import BertConfig, BertTokenizer
from transformers import TFBertModel, TFBertForSequenceClassification
from transformers import glue_convert_examples_to_features
from sklearn.model_selection import StratifiedKFold
# sentence BERT
from sentence_transformers import models, SentenceTransformer, util
bert_tokenizer = BertTokenizer.from_pretrained(Base_BERT_Path)
###Output
_____no_output_____
###Markdown
📲 Loading the dataset
###Code
from google.colab import drive
drive.mount('/content/drive')
data_address = '/content/drive/MyDrive/COVID-PSS.xls'
keys_address = '/content/drive/MyDrive/keywords_final_distilled_NE (1).pickle'
cleaned_titles_address = '/content/drive/MyDrive/title_cleaned_without_corona_2.pkl'
df = pd.read_csv(data_address)
list_t = pd.read_pickle(cleaned_titles_address)
keywords = pd.read_pickle(keys_address)
keywords = [v for k,v in keywords.items()]
assert len(keywords) == len(df)
df['keywords'] = keywords
df.drop(columns=['img', 'link'], inplace=True)
tfidf_results = pd.read_pickle('/content/drive/MyDrive/CoPer paper-Models/Results/TFIDF_Ranked.pkl')
sbert_results = pd.read_pickle('/content/drive/MyDrive/CoPer paper-Models/Results/sbert-WikiNli-DifferentWeights_FineTuned_f300-Ranked_2.pkl')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Helpers
###Code
moses = ['است', 'بود', 'شد', 'گشت', 'گردید']
correct_POS = {'AJ': 'ADJ', 'PRO': 'PR', 'P': 'PREP', 'Ne': 'N', 'AJe': 'ADJ'}
def cal_wordiness_level(question, SBJ_answer, OBJ_answer, PP_answer, MOS_answer, V_answer):
wordiness_level = 0
if SBJ_answer != None:
wordiness_level += 20
if bert_tokenizer.tokenize(question)[-1] in moses:
if SBJ_answer != None:
wordiness_level += 20
if PP_answer != None:
wordiness_level += 10
if V_answer != None:
wordiness_level += 20
return wordiness_level
def cal_wordiness_ratio(wordiness_level):
wordiness_ratio = None
if wordiness_level <= 20:
wordiness_ratio = 1
elif 20 < wordiness_level < 30:
wordiness_ratio = 0.80
elif wordiness_level == 30:
wordiness_ratio = 0.60
elif 30 < wordiness_level < 40:
wordiness_ratio = 0.35
elif wordiness_level >= 40:
wordiness_ratio = 0
return wordiness_ratio
correct_POS = {'AJ': 'ADJ', 'PRO': 'PR', 'P': 'PREP', 'Ne': 'N', 'AJe': 'ADJ'}
SBJ_pattern = '^(?!PREP)(PREM|PRENUM(PREP(ADJ)?)?)?((N((ADJ)|(N)+)*(ADJ|N|PR))|(N(ADJ)?)|(PR))(?!POSTP)(CONJ)?'
OBJ_pattern = '^(?!PREP)(PREM|PRENUM(PREP(ADJ)?)?)?((N((ADJ)|(N)+)*(ADJ|N|PR))|(N(ADJ)?)|(PR))(POSTP)'
PP_pattern = '(PREP)(PREM|PRENUM(PREP(ADJ)?)?)?((N((ADJ)|(N)+)*(ADJ|N|PR))|(N(ADJ)?)|(PR))(?!POSTP)(CONJ)?'
MOS_pattern = '^(?!PREP)(PREM|PRENUM(PREP(ADJ)?)?)?((N((ADJ)|(N)+)*(ADJ|N|PR))|(N(ADJ)?)|(PR))(?!POSTP)(CONJ)?'
V_pattern = '(V)*V'
tagger = POSTagger(model='resources/postagger.model')
def tfidf_ratio(question):
tokenized_question = bert_tokenizer.tokenize(question)
sentence_tagged = tagger.tag(tokenized_question)
tag_query = ''.join([each[1] if each[1] not in correct_POS else correct_POS[each[1]] for each in sentence_tagged])
SBJ_answer = re.search(SBJ_pattern, tag_query)
OBJ_answer = re.search(OBJ_pattern, tag_query)
PP_answer = re.search(PP_pattern, tag_query)
MOS_answer = re.search(MOS_pattern, tag_query)
V_answer = re.search(V_pattern, tag_query)
wordiness_level = cal_wordiness_level(question, SBJ_answer, OBJ_answer, PP_answer, MOS_answer, V_answer)
wordiness_ratio = cal_wordiness_ratio(wordiness_level)
return wordiness_ratio
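# Note: tfidf_ratio() returns 1 for short, keyword-like questions (little detected
# grammatical structure) and 0 for fully-formed sentences; get_results() below uses
# this value to decide how much weight the lexical TF-IDF score receives.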
def get_results(tfidf_results, sbert_results, top_n = 50):
"""Takes in both sides-tfidf and sbert then outputs a score"""
# check if both have same number of results
assert len(tfidf_results) == len(sbert_results)
results = []
for i in range(len(tfidf_results)):
print(f'question {i+1} ')
# checking the question
assert tfidf_results[i]['question'] == sbert_results[i]['question']
assert len(tfidf_results[i]['index']) == len(sbert_results[i]['index'])
question = tfidf_results[i]['question']
wordiness = tfidf_ratio(question)
# creating a dataframe for each question
# getting the indices for each bm_selected record
# in order for each model
tfidf_indices = tfidf_results[i]['index']
sbert_indices = sbert_results[i]['index']
# getting the scores for each record
tfidf_scores = tfidf_results[i]['score']
sbert_scores = sbert_results[i]['score']
# standardizing scores
# tfidf
tfidf_max = max(tfidf_scores)
tfidf_scores = tfidf_scores/tfidf_max
# sbert
bert_max = max(sbert_scores)
sbert_scores = sbert_scores/bert_max
# making a df for each
assert len(tfidf_indices) == len(tfidf_scores)
df_scores_tfidf = pd.DataFrame(np.c_[tfidf_indices, tfidf_scores], columns = ['indices', 'scores_tfidf'])
df_scores_sbert = pd.DataFrame(np.c_[sbert_indices, sbert_scores], columns = ['indices', 'scores_sbert'])
# merging them based on the indices
df_all = pd.merge(df_scores_tfidf, df_scores_sbert,
on = 'indices').sort_values(by='scores_tfidf')
# creating an overall score
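        # overall = 0.9*sbert + 0.1*(wordiness*tfidf + (1 - wordiness)*sbert): the closer the
        # question is to a bare keyword query (wordiness ratio near 1), the more the lexical
        # TF-IDF score contributes; fully-formed questions fall back to pure SBERT.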
df_all.loc[:, 'overall_score'] = 0.9 * df_all.scores_sbert + 0.1 * ((wordiness * df_all.scores_tfidf) + ((1 - wordiness) * (df_all.scores_sbert)))
# sorting them and getting the top results
df_all = df_all.sort_values(by='overall_score', ascending=False)
top_k = df_all.indices.values[:top_n]
results.append({'question': question,
'wordiness_rate': wordiness,
'index': top_k})
return results, df_all[:top_n]
###Output
_____no_output_____
###Markdown
Get the results
###Code
results, df_all = get_results(tfidf_results, sbert_results, top_n = 50)
with open('/content/drive/MyDrive/CoPer paper-Models/Results/Ranked_BM25_TFIDF_sbert-WikiNli-DifferentWeights_FineTuned_f300_Ranked_2-0.9s.pkl', 'wb') as f:
pickle.dump(results, f)
###Output
_____no_output_____
|
climate_starter-Copy1.ipynb
|
###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
inspector = inspect(engine)
###Output
_____no_output_____
###Markdown
Exploratory Climate Analysis
###Code
m_columns = inspector.get_columns('measurement')
for column in m_columns:
print(column["name"], column["type"])
s_columns = inspector.get_columns('station')
for column in s_columns:
print(column["name"], column["type"])
#Latest Date
session.query(Measurement.date).order_by(Measurement.date.desc()).first()
# Calculate the date 1 year ago from the last data point in the database
date = dt.datetime(2016, 8, 23)
sel = [Measurement.date,
Measurement.prcp]
twelvemonths = session.query(*sel).filter(Measurement.date >= date).all()
twelvemonths
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(twelvemonths, columns=['Date', 'Precip Score'])
df.dropna()
df.set_index('Date', inplace = True)
# Sort the dataframe by date
df.sort_values(by=['Date'])
# Use Pandas Plotting with Matplotlib to plot the data
df.plot.bar()
plt.show()
# Use Pandas to calcualte the summary statistics for the precipitation data
mean = df['Precip Score'].mean()
median = df['Precip Score'].median()
mode = df['Precip Score'].mode()
print(f'Mean: {mean} inches')
print(f'Median: {median} inches')
print(f'Mode: {mode} inches')
# Design a query to show how many stations are available in this dataset?
number_stations = session.query(Station).count()
print(f'There are {number_stations} stations')
#Dissect Rows in Station
first_row_s = session.query(Station).first()
first_row_s.__dict__
#Dissect Rows in Measurement
first_row_m = session.query(Measurement).first()
first_row_m.__dict__
# What are the most active stations? (i.e. what stations have the most rows)?
sel_2 = [Measurement.station, Station.name, func.count(Measurement.date)]
station_join = session.query(*sel_2).filter(Measurement.station == Station.station)
observ_desc =station_join.group_by(Measurement.station)\
.order_by(func.count(Measurement.date).desc()).all()
print(f'The below list is organized based on observations per station in descending order:')
observ_desc
# List the stations and the counts in descending order.
# List the most active station
first_station = station_join.group_by(Measurement.station)\
.order_by(func.count(Measurement.date).desc()).first()
station_1= first_station[0]
name_1 = first_station[1]
print(f'The most active station is {name_1} and identified as {station_1}')
#Design a query to retrieve the last 12 months of temperature observation data (TOBS).
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
sel_3 = [Measurement.station, Station.name, Measurement.date, func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)]
station_join_2 = session.query(*sel_3).filter(Measurement.station == Station.station).filter(Measurement.date>=date)
station_filter = station_join_2.filter(Measurement.station == station_1).first()
lowest_temp = station_filter[3]
highest_temp = station_filter[4]
average_temp = station_filter[5]
print(f'At Station {station_1}, here are a few statistics:')
print(f'Lowest Temp Recorded = {lowest_temp} degrees F')
print(f'Highest Temp Recorded = {highest_temp} degrees F')
print(f'Average Temp Recorded = {average_temp} degrees F')
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
sel_4 = [Measurement.station, Measurement.tobs]
station_join_3 = session.query(*sel_4).filter(Measurement.station == Station.station).filter(Measurement.date>=date)
station_join_31 =station_join_3.filter(Measurement.station == station_1).all()
df_2 = pd.DataFrame(station_join_31, columns=['Station', 'TOBS'])
df_2
df_2.dropna()
#df.set_index('Date', inplace = True)
df_2.plot.hist(bins=12, alpha=0.5)
plt.show()
#df.sort_values(by=['Date'])
# Use Pandas Plotting with Matplotlib to plot the data
#df.plot.bar()
#plt.show()
###Output
_____no_output_____
###Markdown
Bonus Challenge Assignment
###Code
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
        TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Stip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
###Output
_____no_output_____
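###Markdown
A sketch of one way to finish the "Trip Avg Temp" step described above (hypothetical trip dates, assuming matplotlib was imported as `plt` earlier in the notebook): query the prior year's range with `calc_temps` and plot the average as a single bar whose error bar is the peak-to-peak spread.
###Code
# Hedged sketch, not the graded solution; the trip dates are arbitrary examples.
tmin, tavg, tmax = calc_temps('2016-02-28', '2016-03-05')[0]

fig, ax = plt.subplots(figsize=(3, 6))
ax.bar(0, tavg, yerr=tmax - tmin, width=0.4, color='coral', alpha=0.6)
ax.set_xticks([])
ax.set_ylabel("Temp (F)")
ax.set_title("Trip Avg Temp")
plt.show()
###Output
_____no_output_____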
###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite",echo=False)
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
first_row=session.query(Measurement).first()
first_row.__dict__
station_first_row = session.query(Station).first()
station_first_row.__dict__
###Output
_____no_output_____
###Markdown
Exploratory Climate Analysis
###Code
# Calculate the date 1 year ago from the last data point in the database
last_date=session.query(Measurement.date).order_by(Measurement.date.desc()).first()[0]
print(last_date)
year_ago=dt.datetime.strptime(last_date, "%Y-%m-%d") - dt.timedelta(days=366)
print(year_ago)
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Perform a query to retrieve the data and precipitation scores
prcp_lastyr= session.query(Measurement.prcp,Measurement.date).filter(Measurement.date>year_ago).all()
print(prcp_lastyr)
# Save the query results as a Pandas DataFrame and set the index to the date column
prcp_df= pd.DataFrame(prcp_lastyr,columns=['precipitation','date'])
# Sort the dataframe by date
prcp_sortbydate= prcp_df.sort_values(by=['date'],ascending=True).set_index('date')
# prcp_sortbydate
# Use Pandas Plotting with Matplotlib to plot the data
prcp_sortbydate.plot(title="Precipitation graph")
plt.xticks(rotation='vertical')
plt.savefig("12MonthPrecipitation")
# Use Pandas to calcualte the summary statistics for the precipitation data
prcp_sortbydate.describe()
# Design a query to show how many stations are available in this dataset?
for row in session.query(Measurement.station).distinct():
print(row)
# Design a query to show how many stations are available in this dataset?
for row in session.query(Station.station).all():
print(row)
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
most_active=session.query(Measurement.station, func.count(Measurement.station)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).first()
most_active
station_num=most_active[0]
station_num
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
lowest_temp= session.query(func.min(Measurement.tobs)).filter(Measurement.station == station_num).all()
highest_temp= session.query(func.max(Measurement.tobs)).filter(Measurement.station == station_num).all()
avg_temp= session.query(func.avg(Measurement.tobs)).filter(Measurement.station == station_num).all()
print(f"Temperature recorded in station {station_num} \n Highest is {highest_temp}\n Lowest is {lowest_temp} \n Average is {avg_temp}")
# Choose the station with the highest number of temperature observations.
most_temp=session.query(Measurement.station, func.count(Measurement.tobs)).group_by(Measurement.station).order_by(func.count(Measurement.tobs).desc()).first()
most_temp_station=most_temp[0]
most_temp_station
# Query the last 12 months of temperature observation data for this station
temp_lastyr= session.query(Measurement.tobs,func.count(Measurement.tobs)).filter(Measurement.date>year_ago).filter(Measurement.station == most_temp_station).all()
temp_lastyr
# convert the query results to a DataFrame and sort by date
temp_df= pd.DataFrame(temp_lastyr,columns=['tobs','date'])
temp_sortbydate= temp_df.sort_values(by=['date']).set_index('date')
temp_sortbydate.head()
# plot the results as a histogram
temp_sortbydate.plot.hist(title= "Temp by Frequency",bins=12, color='blue',alpha=0.75)
plt.show()
###Output
_____no_output_____
###Markdown

###Code
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
        TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
###Output
[('USC00516128', 'MANOA LYON ARBO 785.2, HI US', 21.3331, -157.8025, 152.4, 0.31), ('USC00519281', 'WAIHEE 837.5, HI US', 21.45167, -157.84888999999998, 32.9, 0.25), ('USC00518838', 'UPPER WAHIAWA 874.3, HI US', 21.4992, -158.0111, 306.6, 0.1), ('USC00513117', 'KANEOHE 838.1, HI US', 21.4234, -157.8015, 14.6, 0.060000000000000005), ('USC00511918', 'HONOLULU OBSERVATORY 702.2, HI US', 21.3152, -157.9992, 0.9, 0.0), ('USC00514830', 'KUALOA RANCH HEADQUARTERS 886.9, HI US', 21.5213, -157.8374, 7.0, 0.0), ('USC00517948', 'PEARL CITY, HI US', 21.3934, -157.9751, 11.9, 0.0), ('USC00519397', 'WAIKIKI 717.2, HI US', 21.2716, -157.8168, 3.0, 0.0), ('USC00519523', 'WAIMANALO EXPERIMENTAL FARM, HI US', 21.33556, -157.71139, 19.5, 0.0)]
###Markdown
Optional Challenge Assignment
###Code
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Stip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
###Output
_____no_output_____
###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
# from flask import Flask, jsonify
# app = Flask(__name__)
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Station = Base.classes.station
Measurement = Base.classes.measurement
# Create our session (link) from Python to the DB
session = Session(engine)
#Get table names using inspect.
inspector = inspect(engine)
inspector.get_table_names()
# Get columns and info in 'measurement'.
columns = inspector.get_columns('measurement')
for column in columns:
print(column["name"], column["type"])
# Get columns and info in 'station'.
columns = inspector.get_columns('station')
for column in columns:
print(column["name"], column["type"])
#Get an idea of what the tables looks like.
engine.execute('SELECT * FROM measurement LIMIT 10').fetchall()
engine.execute('SELECT * FROM station LIMIT 25').fetchall()
###Output
_____no_output_____
###Markdown
Exploratory Precipitation Analysis
###Code
# Find the most recent date in the data set.
latest_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
latest_date
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
year_ago = dt.date(2017, 8, 23) - dt.timedelta(days=365)
year_ago
# Perform a query to retrieve the data and precipitation scores
mdate_1617_results = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date >= '2016-08-23').filter(Measurement.date <= '2017-08-23').all()
mdate_1617_results
# Save the query results as a Pandas DataFrame and set the index to the date column
mdate_1617_results_data = pd.DataFrame(mdate_1617_results, columns=['date', 'prcp'])
mdate_1617_results_data.set_index('date')
# Sort the dataframe by date
mdate_1617_results_data.dropna()
mdate_1617_results_data.sort_values(by='date')
# Use Pandas Plotting with Matplotlib to plot the data
mdate_1617_results_data.plot(x="date", y="prcp")
# Set the label for the x-axis and y-axis
plt.xlabel("Date")
plt.ylabel("Inches")
plt.xticks(rotation='90')
# Use Pandas to calcualte the summary statistics for the precipitation data
mdate_1617_results_data.describe()
###Output
_____no_output_____
###Markdown
Exploratory Station Analysis
###Code
# Design a query to calculate the total number stations in the dataset
all_stations = session.query(Station.name).count()
all_stations
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
active_station = session.query(Measurement.station,func.count(Measurement.station)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
active_station
# Using the most active station id from the previous query ('USC00519281'), calculate the lowest, highest, and average temperature.
lha_temp = session.query(func.max(Measurement.tobs), func.min(Measurement.tobs),func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00519281').all()
lha_temp
# Using the most active station id
# Query the last 12 months of temperature observation data for this station.
date_temp_1617 = session.query(Measurement.date, Measurement.tobs).\
    filter(Measurement.date >= '2016-08-23').filter(Measurement.station == 'USC00519281').all()
date_temp_data = pd.DataFrame(date_temp_1617, columns=['date', 'tobs'])
date_temp_data.set_index('date')
# Plot the results as a histogram
plt.hist(date_temp_data['tobs'], bins=12)
plt.xlabel("Temperature")
plt.ylabel("Frequency")
###Output
_____no_output_____
###Markdown
Close session
###Code
# Close Session
session.close()
###Output
_____no_output_____
###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func,inspect
from flask import Flask, jsonify
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
Base = automap_base()
# reflect an existing database into a new model
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
inspector = inspect(engine)
inspector.get_table_names()
session=Session(engine)
# Create our session (link) from Python to the DB
columns = inspector.get_columns('measurement')
for column in columns:
print(column['name'], column['type'])
columns = inspector.get_columns('station')
for column in columns:
print(column['name'], column['type'])
###Output
id INTEGER
station TEXT
name TEXT
latitude FLOAT
longitude FLOAT
elevation FLOAT
###Markdown
Exploratory Climate Analysis
###Code
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
engine.execute('SELECT date,prcp FROM measurement LIMIT 5').fetchall()
session.query(Measurement.date).order_by(Measurement.date).first()
session.query(Measurement.date).order_by(Measurement.date.desc()).first()
query_date = dt.date(2017,8,23) - dt.timedelta(days=365)
print("QUERYDATE:",query_date)
data=session.query(Measurement.date,Measurement.prcp).filter(Measurement.date >= query_date).all()
df = pd. DataFrame(data)
df.head(5)
df1=df.set_index("date")
df1.sort_values(by=['date'])
df1.head(5)
ax = df1.plot(kind='bar', width=3, figsize=(10,8))
plt.locator_params(axis='x', nbins=6)
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.tick_params(axis='y', labelsize=16)
ax.grid(True)
plt.legend(bbox_to_anchor=(.3,1), fontsize="16")
plt.title("Precipitation Last 12 Months", size=20)
plt.ylabel("Precipitation (Inches)", size=18)
plt.xlabel("Date", size=18)
plt.show
# Use Pandas to calcualte the summary statistics for the precipitation data
df1["prcp"].describe()
# Design a query to show how many stations are available in this dataset?
session.query(Station).count()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
# d=session.query(Station.station,Measurement.station).first()
station=session.query(Measurement.station,func.count(Measurement.station)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc())
for s in station:
print(s)
station[0], station[0][0]
type (station)
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
active_station =session.query(func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)).filter(Measurement.station==station[0][0])
# data1=pd.DataFrame(s)
# data1
active_station[0]
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
d=session.query(Measurement.station,func.max(Measurement.tobs)).first()
d
# query_temp = session.query(Measurement.tobs,Measurement.date).filter(dt.date(2017,8,23) - dt.timedelta(days=365)).all()
# query_temp = session.query(Measurement.tobs,Measurement.date).filter(dt.date(2017,8,23) - dt.timedelta(days=365)).all()
query_temp=session.query(Measurement.date,Measurement.tobs).filter(Measurement.station==station[0][0]).filter(Measurement.date>=query_date).all()
query_temp
df2 = pd. DataFrame(query_temp)
df3=df2.set_index("date")
df3.head(5)
ax = df3.plot(kind='hist', width=3, figsize=(12,8))
plt.locator_params(axis='x', nbins=6)
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.tick_params(axis='y', labelsize=16)
ax.grid(True)
plt.legend(bbox_to_anchor=(.3,1), fontsize="16")
plt.title(" Temperature observation data for this station Last 12 Months", size=20)
plt.ylabel("Frequency", size=18)
plt.xlabel("Temperature", size=18)
plt.show
###Output
_____no_output_____
###Markdown
Bonus Challenge Assignment
###Code
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
        TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
app = Flask(__name__)
@app.route("/")
def welcome():
"""List all available api routes."""
return (
f"Available Routes:<br/>"
f"/api/v1.0/names<br/>"
f"/api/v1.0/passengers"
)
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Stip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
###Output
_____no_output_____
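###Markdown
A sketch of one way to complete the daily-normals steps listed above (hypothetical trip dates, assuming `pd` and `plt` are available from the notebook's setup): compute the normals for each month-day string, load them into a DataFrame indexed by the trip dates, and draw an unstacked area plot.
###Code
# Hedged sketch with arbitrary example trip dates.
trip_dates = pd.date_range('2017-08-01', '2017-08-07')
normals = [daily_normals(d.strftime('%m-%d'))[0] for d in trip_dates]
normals_df = pd.DataFrame(normals, columns=['tmin', 'tavg', 'tmax'], index=trip_dates)
normals_df.plot.area(stacked=False, alpha=0.3)
plt.show()
###Output
_____no_output_____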
###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func,inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# Design a query to retrieve the last 12 months of precipitation data and plot the results
inspector=inspect(engine)
inspector.get_table_names()
columns = inspector.get_columns('measurement')
for column in columns:
print(column["name"], column["type"])
columns = inspector.get_columns('station')
for column in columns:
print(column["name"], column["type"])
session.query(func.count(Measurement.date)).all()
date_str= session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(date_str)
datestored='2017-08-23'
recentdate = datetime.strptime(datestored, '%Y-%m-%d').date()
print(type(recentdate))
print(recentdate) # printed in default formatting
# # session.query(Measurement.date).order_by(Measurement.date.desc()).all() where count=1
# session.query(SELECT DATE('2017-08-23','+1 month'));
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
prev_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
print(prev_year)
# Perform a query to retrieve the data and precipitation scores
results = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= prev_year).order_by(Measurement.date).all()
print(results)
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(results, columns=['date', 'precipitation'])
df.set_index('date',inplace=True)
df.head()
# Sort the dataframe by date
df.sort_values("date")
df.dropna(how='all')
# Use Pandas Plotting with Matplotlib to plot the data
df.plot()
# Use Pandas to calcualte the summary statistics for the precipitation data
df.describe()
# Design a query to show how many stations are available in this dataset?
session.query(Station.station).distinct().count()
# session.query(Measurement.station).distinct().count()
stations = session.query(Measurement).group_by(Measurement.station).count()
print(stations)
# What are the most active stations? (i.e. what stations have the most rows)?
#
active_stations = session.query(Measurement.station, func.count(Measurement.station))\
.group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
active_stations
# List the stations and the counts in descending order.
activestation=active_stations[0]
print (activestation)
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
stationdata=session.query(Measurement.station, func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs))\
.filter(Measurement.station==activestation[0]).all()
# print(stationdata)
stationdata
# all_observations = session.query(Measurement.station,func.min(Measurement.tobs), func.max(Measurement.tobs),func.avg(Measurement.tobs))\
# .filter(Measurement.station == active_stations_val[0]).all()
# all_observations
# Choose the station with the highest number of temperature observations.
hightemp = session.query(Measurement.station, func.count(Measurement.tobs))\
.group_by(Measurement.station).order_by(func.count(Measurement.tobs).desc()).all()
hightemp
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
most_active_tobs = session.query(Measurement.tobs).filter(Measurement.date >= prev_year).filter(Measurement.station==activestation[0]).all()
most_active_tobs
df = pd.DataFrame(most_active_tobs, columns=['tobs'])
df.head()
df.plot.hist(bins=12)
###Output
_____no_output_____
|
notebooks/WR-train.ipynb
|
###Markdown
This notebook is for training WR and saving the processed embedding
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader,random_split
from torch.optim import Adam
from torch.autograd import Variable
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from collections import defaultdict
from transformers import AutoTokenizer, AutoModelWithLMHead
from transformers import *
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("..")
import tools.models as models
import tools.dataloaders as dataloaders
import tools.all_test_forBERT as all_test_forBERT
import tools.loaddatasets as loaddatasets
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
#choose bert model
model_name = 'bert-base-uncased'
if model_name == 'bert-base-uncased':
emb_type = 'base'
if model_name == 'bert-large-uncased':
emb_type = 'large'
#random state for PCA
random_state = 42
#list the ds you want to train
D = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,25,30,35,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200]
#training batch size
batch_size = 32
# learning rate
lr = 2e-4
#training epoch
EPOCHS = 200
# all dataset path for training
word_simi_train_file = 'datasets//word_simi_train.csv'
word_simi_test_file = 'datasets//word_simi_test.csv'
analogy_test_file = 'datasets//word_analogy.csv'
text_simi_test_file = 'datasets//text_simi.csv'
bert_model = BertModel.from_pretrained(model_name)
bert_model.eval()
bert_tokenizer = BertTokenizer.from_pretrained(model_name)
embedding = bert_model.get_input_embeddings()
ids = torch.tensor(range(30522))
E = embedding(ids).detach().numpy()
print('BERT Embedding shape check:', E.shape)
emb_dimension = E.shape[1]
vocab_len = E.shape[0]
pca = PCA(random_state = random_state).fit(E)
# U
E = torch.tensor(E)
U = pca.components_
np.save('trained-embedding//U_%s.npy' % emb_type , U)
U = torch.tensor(U)
print(E.shape)
print(U.shape)
# load datasets
word_simi_train, word_simi_test,analogy_test, text_simi_test = \
loaddatasets.load_datasets(bert_tokenizer, embedding, word_simi_train_file, word_simi_test_file, analogy_test_file, text_simi_test_file)
train_loader = dataloaders.create_data_loader_forBERT(word_simi_train, batch_size, True, dataloaders.Dataset_direct2emb)
test_loader = dataloaders.create_data_loader_forBERT(word_simi_test, batch_size, False, dataloaders.Dataset_direct2emb)
def train_epoch(model, data_loader, loss_fn, optimizer,device):
model = model.train()
losses = []
for step,d in enumerate(data_loader):
emb = d['emb'].to(device)
simi_label = d['simi_label'].to(device)
simi_predict = model(x = emb)
loss = loss_fn(simi_predict, simi_label)
losses.append(loss.item())
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
return losses
# training
for d in D:
print(f'D: {d}')
print('~' * 10)
u = U[:d]
u = Variable(torch.tensor(u.T), requires_grad=False).to(device)
print('this time u\'shape is: ', u.shape)
model = models.Percoefficient_Model(emb_dimension = emb_dimension, component_num = d, U = u).to(device)
optimizer = Adam(model.parameters(), lr = lr)
total_steps = len(train_loader) * EPOCHS
loss_fn = nn.MSELoss().to(device)
for epoch in range(EPOCHS):
#mark the start of each epoch
print('-' * 10)
print(f'Epoch {epoch + 1}/{EPOCHS}')
train_loss = train_epoch(
model,
train_loader,
loss_fn,
optimizer,
device
)
epoch_loss = np.mean(train_loss)
print(f'Train loss {epoch_loss} ')
x = []
for parameters in model.parameters():
print(parameters)
x.append(parameters)
para = x[0].sum(axis = 0).cpu().detach()
u_cpu = u.cpu().detach()
coe = torch.matmul(E,u_cpu)
weighted_coe = torch.mul(para,coe)
weighted_u = torch.matmul(weighted_coe,u_cpu.T)
Emb = (E-weighted_u).numpy()
np.save('trained-embedding/%sEmb_%s.npy' %(emb_type, d),Emb)
torch.save(model,'trained-model/%s_%s_%s.pth' %(emb_type, d, EPOCHS))
print('%s_%s_%s model saved' %(emb_type, d, EPOCHS) )
###Output
D: 1
~~~~~~~~~~
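###Markdown
 As an optional sanity check, the arrays saved above can be reloaded from disk. This is a minimal sketch that assumes the np.save calls in this notebook ran unchanged (same emb_type, and a d value from D); check_d is just an example choice.
###Code
# Optional check: reload the PCA components and one processed embedding.
# Paths mirror the np.save calls above; check_d is an example value.
check_d = D[0]
U_loaded = np.load('trained-embedding//U_%s.npy' % emb_type)
Emb_loaded = np.load('trained-embedding/%sEmb_%s.npy' % (emb_type, check_d))
print('U:', U_loaded.shape, '| processed embedding:', Emb_loaded.shape)
###Output
_____no_output_____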
|
Quantum-with-Qiskit/Q92_Grovers_Search_Implementation_Solutions.ipynb
|
###Markdown
$ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ Solutions for Grover's Search: Implementation _prepared by Maksim Dimitrijev and Özlem Salehi_ Task 2 Let $N=4$. Implement the query phase and check the unitary matrix for the query operator. Note that we are interested in the top-left $4 \times 4$ part of the matrix since the remaining parts are due to the ancilla qubit. You are given a function $f$ and its corresponding quantum operator $U_f$. First run the following cell to load operator $U_f$. Then you can make queries to $f$ by applying the operator $U_f$ via the following command: Uf(circuit,qreg).
###Code
%run quantum.py
###Output
_____no_output_____
###Markdown
Now use phase kickback to flip the sign of the marked element: Set output qubit (qreg[2]) to $\ket{-}$ by applying X and H. Apply operator $U_f$ Set output qubit (qreg[2]) back.(Can you guess the marked element by looking at the unitary matrix?) Solution
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(3)
#No need to define classical register as we are not measuring
mycircuit = QuantumCircuit(qreg)
#set ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
Uf(mycircuit,qreg)
#set ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])
job = execute(mycircuit,Aer.get_backend('unitary_simulator'))
u=job.result().get_unitary(mycircuit,decimals=3)
#We are interested in the top-left 4x4 part
for i in range(4):
s=""
for j in range(4):
val = str(u[i][j].real)
while(len(val)<5): val = " "+val
s = s + val
print(s)
mycircuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Task 3Let $N=4$. Implement the inversion operator and check whether you obtain the following matrix:$\mymatrix{cccc}{-0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & -0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & -0.5 & 0.5 \\ 0.5 & 0.5 & 0.5 & -0.5}$. Solution
###Code
def inversion(circuit,quantum_reg):
#step 1
circuit.h(quantum_reg[1])
circuit.h(quantum_reg[0])
#step 2
circuit.x(quantum_reg[1])
circuit.x(quantum_reg[0])
#step 3
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[2])
#step 4
circuit.x(quantum_reg[1])
circuit.x(quantum_reg[0])
#step 5
circuit.x(quantum_reg[2])
#step 6
circuit.h(quantum_reg[1])
circuit.h(quantum_reg[0])
###Output
_____no_output_____
###Markdown
Below you can check the matrix of your inversion operator and how the circuit looks like. We are interested in top-left $4 \times 4$ part of the matrix, the remaining parts are because we used ancilla qubit.
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg1 = QuantumRegister(3)
mycircuit1 = QuantumCircuit(qreg1)
#set ancilla qubit
mycircuit1.x(qreg1[2])
mycircuit1.h(qreg1[2])
inversion(mycircuit1,qreg1)
#set ancilla qubit back
mycircuit1.h(qreg1[2])
mycircuit1.x(qreg1[2])
job = execute(mycircuit1,Aer.get_backend('unitary_simulator'))
u=job.result().get_unitary(mycircuit1,decimals=3)
for i in range(4):
s=""
for j in range(4):
val = str(u[i][j].real)
while(len(val)<5): val = " "+val
s = s + val
print(s)
mycircuit1.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Task 4: Testing Grover's searchNow we are ready to test our operations and run Grover's search. Suppose that there are 4 elements in the list and try to find the marked element.You are given the operator $U_f$. First run the following cell to load it. You can access it via Uf(circuit,qreg).qreg[2] is the ancilla qubit and it is shared by the query and the inversion operators. Which state do you observe the most?
###Code
%run quantum.py
###Output
_____no_output_____
###Markdown
Solution
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(3)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)
#Grover
#initial step - equal superposition
for i in range(2):
mycircuit.h(qreg[i])
#set ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
mycircuit.barrier()
#change the number of iterations
iterations=1
#Grover's iterations.
for i in range(iterations):
#query
Uf(mycircuit,qreg)
mycircuit.barrier()
#inversion
inversion(mycircuit,qreg)
mycircuit.barrier()
#set ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])
mycircuit.measure(qreg[0],creg[0])
mycircuit.measure(qreg[1],creg[1])
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
# print the outcome
for outcome in counts:
print(outcome,"is observed",counts[outcome],"times")
mycircuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Task 5 (Optional, challenging)Implement the inversion operation for $n=3$ ($N=8$). This time you will need 5 qubits - 3 for the operation, 1 for ancilla, and one more qubit to implement not gate controlled by three qubits.In the implementation the ancilla qubit will be qubit 3, while qubits for control are 0, 1 and 2; qubit 4 is used for the multiple control operation. As a result you should obtain the following values in the top-left $8 \times 8$ entries:$\mymatrix{cccccccc}{-0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75}$. Solution
###Code
def big_inversion(circuit,quantum_reg):
for i in range(3):
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[i])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
circuit.ccx(quantum_reg[2],quantum_reg[4],quantum_reg[3])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
for i in range(3):
circuit.x(quantum_reg[i])
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[3])
###Output
_____no_output_____
###Markdown
Below you can check the matrix of your inversion operator. We are interested in the top-left $8 \times 8$ part of the matrix, the remaining parts are because of additional qubits.
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
big_qreg2 = QuantumRegister(5)
big_mycircuit2 = QuantumCircuit(big_qreg2)
#set ancilla
big_mycircuit2.x(big_qreg2[3])
big_mycircuit2.h(big_qreg2[3])
big_inversion(big_mycircuit2,big_qreg2)
#set ancilla back
big_mycircuit2.h(big_qreg2[3])
big_mycircuit2.x(big_qreg2[3])
job = execute(big_mycircuit2,Aer.get_backend('unitary_simulator'))
u=job.result().get_unitary(big_mycircuit2,decimals=3)
for i in range(8):
s=""
for j in range(8):
val = str(u[i][j].real)
while(len(val)<6): val = " "+val
s = s + val
print(s)
###Output
_____no_output_____
###Markdown
Task 6: Testing Grover's search for 8 elements (Optional, challenging)Now we will test Grover's search on 8 elements.You are given the operator $U_{f_8}$. First run the following cell to load it. You can access it via:Uf_8(circuit,qreg) Which state do you observe the most?
###Code
%run quantum.py
###Output
_____no_output_____
###Markdown
Solution
###Code
def big_inversion(circuit,quantum_reg):
for i in range(3):
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[i])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
circuit.ccx(quantum_reg[2],quantum_reg[4],quantum_reg[3])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
for i in range(3):
circuit.x(quantum_reg[i])
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[3])
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg8 = QuantumRegister(5)
creg8 = ClassicalRegister(3)
mycircuit8 = QuantumCircuit(qreg8,creg8)
#set ancilla
mycircuit8.x(qreg8[3])
mycircuit8.h(qreg8[3])
#Grover
for i in range(3):
mycircuit8.h(qreg8[i])
mycircuit8.barrier()
#Try 1, 2, 6, 12 iterations of Grover
for i in range(2):
Uf_8(mycircuit8,qreg8)
mycircuit8.barrier()
big_inversion(mycircuit8,qreg8)
mycircuit8.barrier()
#set ancilla back
mycircuit8.h(qreg8[3])
mycircuit8.x(qreg8[3])
for i in range(3):
mycircuit8.measure(qreg8[i],creg8[i])
job = execute(mycircuit8,Aer.get_backend('qasm_simulator'),shots=10000)
counts8 = job.result().get_counts(mycircuit8)
# print the reverse of the outcome
for outcome in counts8:
print(outcome,"is observed",counts8[outcome],"times")
mycircuit8.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Task 8Implement an oracle function which marks the element 00. Run Grover's search with the oracle you have implemented.
###Code
def oracle_00(circuit,qreg):
    # your code here (see the solution below)
    pass
###Output
_____no_output_____
###Markdown
Solution
###Code
def oracle_00(circuit,qreg):
circuit.x(qreg[0])
circuit.x(qreg[1])
circuit.ccx(qreg[0],qreg[1],qreg[2])
circuit.x(qreg[0])
circuit.x(qreg[1])
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(3)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)
#Grover
#initial step - equal superposition
for i in range(2):
mycircuit.h(qreg[i])
#set ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
mycircuit.barrier()
#change the number of iterations
iterations=1
#Grover's iterations.
for i in range(iterations):
#query
oracle_00(mycircuit,qreg)
mycircuit.barrier()
#inversion
inversion(mycircuit,qreg)
mycircuit.barrier()
#set ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])
mycircuit.measure(qreg[0],creg[0])
mycircuit.measure(qreg[1],creg[1])
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
# print the reverse of the outcome
for outcome in counts:
reverse_outcome = ''
for i in outcome:
reverse_outcome = i + reverse_outcome
print(reverse_outcome,"is observed",counts[outcome],"times")
mycircuit.draw(output='mpl')
###Output
_____no_output_____
|
notebooks/__debugging/TEST_metadata_ascii_parser_IPTS_20444.ipynb
|
###Markdown
Select Your IPTS
###Code
# #from __code.filename_metadata_match import FilenameMetadataMatch
# from __code import system
# system.System.select_working_dir()
# from __code.__all import custom_style
# custom_style.style()
import ipywe.fileselector
from IPython.core.display import HTML
from __code.time_utility import RetrieveTimeStamp
import os
class FilenameMetadataMatch(object):
data_folder = ''
metadata_file = ''
list_data_time_stamp = None
def __init__(self, working_dir='./'):
self.working_dir = working_dir
def select_input_folder(self):
_instruction = "Select Input Folder ..."
self.input_folder_ui = ipywe.fileselector.FileSelectorPanel(instruction=_instruction,
start_dir=self.working_dir,
next=self.select_input_folder_done,
type='directory',
)
self.input_folder_ui.show()
def select_input_folder_done(self, folder):
self.data_folder = folder
display(HTML('Folder Selected: <span style="font-size: 20px; color:green">' + folder))
def select_metadata_file(self):
_instruction = "Select Metadata File ..."
self.metadata_ui = ipywe.fileselector.FileSelectorPanel(instruction=_instruction,
start_dir=self.working_dir,
next=self.select_metadata_file_done,
)
self.metadata_ui.show()
def select_metadata_file_done(self, metadata_file):
self.metadata_file = metadata_file
display(HTML('Metadata File Selected: <span style="font-size: 20px; color:green">' + metadata_file))
def retrieve_time_stamp(self):
o_retriever = RetrieveTimeStamp(folder=self.data_folder)
o_retriever._run()
self.list_data_time_stamp = o_retriever
def load_metadata(self):
metadata_file = self.metadata_file
###Output
_____no_output_____
###Markdown
Select Input Folder This is where we select the folder of images that we will need to match with the metadata
###Code
#o_match = FilenameMetadataMatch(working_dir=system.System.get_working_dir())
o_match = FilenameMetadataMatch(working_dir='/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-20444-Regina/')
o_match.select_input_folder()
###Output
_____no_output_____
###Markdown
Select Metadata File Here we select the metadata file (*.mpt)
###Code
o_match.select_metadata_file()
###Output
_____no_output_____
###Markdown
Retrieve Time Stamp
###Code
o_match.retrieve_time_stamp()
###Output
_____no_output_____
###Markdown
Load Metadata File DEBUGGING STARTS HERE
###Code
import pandas as pd
import codecs
from __code.file_handler import get_file_extension
from ipywidgets import widgets
import os
import pprint
#metadata_file = o_match.metadata_file
import glob
import platform
if platform.node() == 'mac95470':
git_dir = os.path.abspath(os.path.expanduser('~/git/'))
else:
git_dir = '/Volumes/my_book_thunderbolt_duo/git/'
metadata_list_files = glob.glob(git_dir + '/standards/ASCII/*.mpt')
index_file = 2
metadata_file = metadata_list_files[index_file]
print("Loading file: {}".format(metadata_file))
assert os.path.exists(metadata_file)
###Output
_____no_output_____
###Markdown
**Allow users to define:** * reference_line_showing_end_of_metadata * start_of_data_after_how_many_lines_from_reference_line * index or label of time info column in big table
###Code
from __code.metadata_ascii_parser import *
o_meta = MetadataFileParser(filename=metadata_file,
meta_type='mpt',
time_label='time/s',
reference_line_showing_end_of_metadata='Number of loops',
end_of_metadata_after_how_many_lines_from_reference_line=1)
o_meta.parse()
o_meta.select_data_to_keep()
o_meta.keep_only_columns_of_data_of_interest()
o_meta.select_output_location()
# data = o_meta.get_data()
# metadata_to_keep = np.array(o_meta.box.children[1].value)
# new_data = data[metadata_to_keep]
# my_data = new_data.reset_index()
# my_data.rename(index=str, columns={"index": "TimeStamp"})
###Output
_____no_output_____
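###Markdown
 The commented lines above sketch the intended post-processing. A hedged, runnable version of that draft is shown below; it assumes MetadataFileParser exposes get_data() and the selection widget exactly as those commented lines suggest.
###Code
# Sketch based on the commented draft above; get_data() and the widget
# attributes are assumed to behave as in those commented lines.
import numpy as np
data = o_meta.get_data()
metadata_to_keep = np.array(o_meta.box.children[1].value)
new_data = data[metadata_to_keep]
my_data = new_data.reset_index()
my_data = my_data.rename(index=str, columns={"index": "TimeStamp"})
my_data.head()
###Output
_____no_output_____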
|
src/03_repo_stats/02_repo_stats_corr.ipynb
|
###Markdown
Repo stats correlation Crystal
###Code
import pandas as pd
import numpy as np
# generate related variables
from numpy import mean
from numpy import std
from numpy.random import randn
from numpy.random import seed
from matplotlib import pyplot
df = pd.read_csv('/home/zz3hs/git/dspg21oss/data/dspg21oss/repo_stats_0707.csv') #import csv
df
# Forks vs stars
r = np.corrcoef(df["forks"], df["stars"])
print("Pearson's correlation:", r[0,1])
pyplot.scatter(df["forks"], df["stars"])
pyplot.xlabel("forks")
pyplot.ylabel("stars")
pyplot.show()
# stars vs watchers
r = np.corrcoef(df["stars"], df["watchers"])
print("Pearson's correlation between stars and watchers:", r[0,1])
pyplot.scatter(df["stars"], df["watchers"])
pyplot.xlabel("stars")
pyplot.ylabel("watchers")
pyplot.show()
# watchers vs forks
r = np.corrcoef(df["watchers"], df["forks"])
print("Pearson's correlation between watchers and forks:", r[0,1])
pyplot.scatter(df["watchers"], df["forks"])
pyplot.xlabel("watchers")
pyplot.ylabel("forks")
pyplot.show()
###Output
Pearson's correlation between watchers and forks: 0.8394335455138852
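###Markdown
 The three pairwise checks above can also be summarized in one step; a minimal sketch computing the full Pearson correlation matrix for the same three columns:
###Code
# Pearson correlation matrix for the three engagement counts examined above
print(df[["forks", "stars", "watchers"]].corr(method="pearson"))
###Output
_____no_output_____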
|
3-object-tracking-and-localization/activities/8-vehicle-motion-and-calculus/10. Approaching Instantaneous Speed.ipynb
|
###Markdown
TODO0. Keep working off the ball throwing example. Write code that 1. zooms in on a specified window 2. gets slope near point 3. plots tangent line1. Last time you saw continuous trajectories w/ a known equation...2. Which meant that we could just read off position and velocity3. All trajectories are continuous... but they don't have a known equation.
###Code
import numpy as np
from matplotlib import pyplot as plt
def spring_position(t):
return np.exp(-t) * -np.sin(np.pi*t)
def spring_velocity(t):
term_1 = -np.exp(-t) * np.sin(np.pi*t)
term_2 = np.pi * np.exp(-t) * np.cos(np.pi*t)
return -(term_1 + term_2)
t = np.linspace(0,6, 1000)
x = np.exp(-0.5 * t) * -np.sin(np.pi * t)
plt.plot(t,x)
plt.show()
def show_spring_motion(t_min, t_max, with_velocity=False):
t = np.linspace(t_min,t_max, 1000)
x = spring_position(t)
plt.plot(t,x)
if with_velocity:
v = spring_velocity(t)
plt.plot(t,v)
plt.show()
show_spring_motion(0,6)
# how fast was it going in the very beginning?
show_spring_motion(0, 1)
show_spring_motion(0, 0.5)
show_spring_motion(0,0.2)
show_spring_motion(0, 0.1)
show_spring_motion(0, 0.001)
delta_y = (-0.0032 - 0.0000)
delta_t = (0.0010 - 0.0000)
speed = delta_y / delta_t
print(speed)
###Output
-3.2
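###Markdown
 The remaining TODO items (getting the slope near a point and plotting a tangent line) are not implemented above. Below is a minimal sketch that reuses the spring_position function already defined; t0, dt, and window are example choices, not values from the original notebook.
###Code
# Sketch for the remaining TODO items: estimate the slope near a point
# numerically and plot the corresponding tangent line.
def plot_tangent_line(t0, dt=1e-4, window=1.0):
    # numerical estimate of the instantaneous speed at t0
    slope = (spring_position(t0 + dt) - spring_position(t0)) / dt
    t = np.linspace(max(0.0, t0 - window), t0 + window, 1000)
    plt.plot(t, spring_position(t))
    # tangent line through (t0, x(t0)) with the estimated slope
    plt.plot(t, spring_position(t0) + slope * (t - t0))
    plt.scatter([t0], [spring_position(t0)])
    plt.show()
    return slope

print(plot_tangent_line(0.0))
###Output
_____no_output_____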
|
courses/working with dates/working_dates.ipynb
|
###Markdown
Subtracting datesPython date objects let us treat calendar dates as something similar to numbers: we can compare them, sort them, add, and even subtract them. This lets us do math with dates in a way that would be a pain to do by hand.The 2007 Florida hurricane season was one of the busiest on record, with 8 hurricanes in one year. The first one hit on May 9th, 2007, and the last one hit on December 13th, 2007. How many days elapsed between the first and last hurricane in 2007?
###Code
# Import date
from datetime import date
# Create a date object for May 9th, 2007
start = date(2007, 5, 9)
# Create a date object for December 13th, 2007
end = date(2007, 12, 13)
# Subtract the two dates and print the number of days
print((end - start).days)
###Output
218
###Markdown
Counting events per calendar monthHurricanes can make landfall in Florida throughout the year. As we've already discussed, some months are more hurricane-prone than others.Using florida_hurricane_dates, let's see how hurricanes in Florida were distributed across months throughout the year.We've created a dictionary called hurricanes_each_month to hold your counts and set the initial counts to zero. You will loop over the list of hurricanes, incrementing the correct month in hurricanes_each_month as you go, and then print the result.
###Code
import bz2
import pickle
with open("datasets/florida_hurricane_dates.pkl", "rb") as fp:
florida_hurricane_dates = pickle.load(fp)
display (florida_hurricane_dates[:10])
# A dictionary to count hurricanes per calendar month
hurricanes_each_month = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6:0,
7: 0, 8:0, 9:0, 10:0, 11:0, 12:0}
# Loop over all hurricanes
for hurricane in florida_hurricane_dates:
# Pull out the month
month = hurricane.month
# Increment the count in your dictionary by one
hurricanes_each_month[month] += 1
print(hurricanes_each_month)
###Output
{1: 0, 2: 1, 3: 0, 4: 1, 5: 8, 6: 32, 7: 21, 8: 49, 9: 70, 10: 43, 11: 9, 12: 1}
###Markdown
Sort dates
###Code
# Print the first and last scrambled dates
print(florida_hurricane_dates[0])
print(florida_hurricane_dates[-1])
# Put the dates in order
florida_hurricane_dates_ordered = sorted(florida_hurricane_dates)
# Print the first and last ordered dates
print(florida_hurricane_dates_ordered[0])
print(florida_hurricane_dates_ordered[-1])
import sys
sys.version
# Assign the earliest date to first_date
first_date = min(florida_hurricane_dates)
# Convert to ISO and US formats
iso = "Our earliest hurricane date: " + first_date.isoformat()
us = "Our earliest hurricane date: " + first_date.strftime("%m/%d/%Y")
print("ISO: " + iso)
print("US: " + us)
# Import datetime
from datetime import datetime
# Create a datetime object
dt = datetime(2017, 12, 31, 15, 19, 13)
# Replace the year with 1917
dt_old = dt.replace(year=1917)
# Print the results in ISO 8601 format
print(dt_old)
# Import datetime
from datetime import datetime
# Starting timestamps
timestamps = [1514665153, 1514664543]
# Datetime objects
dts = []
# Loop
for ts in timestamps:
dts.append(datetime.fromtimestamp(ts))
# Print results
print(dts)
###Output
[datetime.datetime(2017, 12, 30, 21, 19, 13), datetime.datetime(2017, 12, 30, 21, 9, 3)]
###Markdown
UTC timezone format Create timezone objects and set it to datetime ones using timedelta
###Code
# Import datetime, timezone
from datetime import datetime, timezone
# October 1, 2017 at 15:26:26, UTC
dt = datetime(2017, 10, 1, 15, 26, 26, tzinfo=timezone.utc)
# Print results
print(dt.isoformat())
# Import datetime, timedelta, timezone
from datetime import datetime, timedelta, timezone
# Create a timezone for Pacific Standard Time, or UTC-8
pst = timezone(timedelta(hours=-8))
# October 1, 2017 at 15:26:26, UTC-8
dt = datetime(2017, 10, 1, 15, 26, 26, tzinfo=pst)
# Print results
print(dt.isoformat())
###Output
2017-10-01T15:26:26-08:00
###Markdown
Add or replace timezones
###Code
df = pd.read_csv('datasets/capital-onebike.csv')
display(df.head())
print(df.shape)
W20529_rides_raw = df[df['Bike number'] == 'W20529']
display(W20529_rides_raw.head())
print(W20529_rides_raw.shape)
W20529_rides_raw['Start date']
W20529_rides = W20529_rides_raw.rename(columns = {'Start date':'start', 'End date':'end'})
W20529_rides['start'] = pd.to_datetime(W20529_rides['start'])
W20529_rides['end'] = pd.to_datetime(W20529_rides['end'])
onebike_datetimes = W20529_rides.loc[:,['start', 'end']].to_dict('records')
onebike_datetimes[:5]
from datetime import datetime, timezone, timedelta
# Create a timezone object corresponding to UTC-4
edt = timezone(timedelta(hours=-4))
# Loop over trips, updating the start and end datetimes to be in UTC-4
for trip in onebike_datetimes[:10]:
# Update trip['start'] and trip['end']
trip['start'] = trip['start'].replace(tzinfo=edt)
trip['end'] = trip['end'].replace(tzinfo=edt)
onebike_datetimes[:5]
# Loop over the trips
for trip in onebike_datetimes[:10]:
# Pull out the start
dt = trip['start']
# Move dt to be in UTC
dt = dt.astimezone(timezone.utc)
# Print the start time in UTC
print('Original:', trip['start'], '| UTC:', dt.isoformat())
from datetime import datetime
from dateutil import tz
et = tz.gettz('Europe/Madrid')
print (et)
# last = datetime(201)  # incomplete scratch line
# Import tz
from dateutil import tz
# Create a timezone object for Eastern Time
et = tz.gettz('America/New_York')
# Loop over trips, updating the datetimes to be in Eastern Time
for trip in onebike_datetimes[:10]:
# Update trip['start'] and trip['end']
trip['start'] = trip['start'].replace(tzinfo=et)
trip['end'] = trip['end'].replace(tzinfo=et)
# Create the timezone object
ist = tz.gettz('Asia/Kolkata')
# Pull out the start of the first trip
local = onebike_datetimes[0]['start']
# What time was it in India?
notlocal = local.astimezone(ist)
# Print them out and see the difference
print(local.isoformat())
print(notlocal.isoformat())
###Output
2017-10-01T15:26:26-04:00
2017-10-02T00:56:26+05:30
###Markdown
Daylight savings How many hours elapsed around daylight saving?Since our bike data takes place in the fall, you'll have to do something else to learn about the start of daylight savings time.Let's look at March 12, 2017, in the Eastern United States, when Daylight Saving kicked in at 2 AM.If you create a datetime for midnight that night, and add 6 hours to it, how much time will have elapsed?
###Code
# Import datetime, timedelta, tz, timezone
from datetime import datetime, timedelta, timezone
from dateutil import tz
# Start on March 12, 2017, midnight, then add 6 hours
start = datetime(2017, 3, 12, tzinfo = tz.gettz('America/New_York'))
end = start + timedelta(hours=6)
print(start.isoformat() + " to " + end.isoformat())
# Import datetime, timedelta, tz, timezone
from datetime import datetime, timedelta, timezone
from dateutil import tz
# Start on March 12, 2017, midnight, then add 6 hours
start = datetime(2017, 3, 12, tzinfo = tz.gettz('America/New_York'))
end = start + timedelta(hours=6)
print(start.isoformat() + " to " + end.isoformat())
# How many hours have elapsed?
print((end - start).total_seconds()/(60*60))
# What if we move to UTC?
print((end.astimezone(timezone.utc) - start.astimezone(timezone.utc))\
.total_seconds()/(60*60))
###Output
2017-03-12T00:00:00-05:00 to 2017-03-12T06:00:00-04:00
6.0
5.0
###Markdown
Finding ambiguous datetimesAt the end of lesson 2, we saw something anomalous in our bike trip duration data. Let's see if we can identify what the problem might be.The data is loaded as onebike_datetimes, and tz has already been imported from dateutil.
###Code
# Loop over trips
for i, trip in enumerate(onebike_datetimes):
trip['start'] = trip['start'].replace(tzinfo=tz.gettz('America/New_York'))
trip['end'] = trip['end'].replace(tzinfo=tz.gettz('America/New_York'))
# if(i%10 == 0):
# print(i, trip)
for i,trip in enumerate(onebike_datetimes):
# Rides with ambiguous start
if tz.datetime_ambiguous(trip['start']):
print("Ambiguous start at " + str(trip['start']))
# Rides with ambiguous end
if tz.datetime_ambiguous(trip['end']):
print("Ambiguous end at " + str(trip['end']))
###Output
Ambiguous start at 2017-11-05 01:01:04-04:00
Ambiguous end at 2017-11-05 01:01:04-04:00
###Markdown
Cleaning daylight saving data with foldAs we've just discovered, there is a ride in our data set which is being messed up by a Daylight Savings shift. Let's clean up the data set so we actually have a correct minimum ride length. We can use the fact that we know the end of the ride happened after the beginning to fix up the duration messed up by the shift out of Daylight Savings.Since Python does not handle tz.enfold() when doing arithmetic, we must put our datetime objects into UTC, where ambiguities have been resolved.onebike_datetimes is already loaded and in the right timezone. tz and timezone have been imported. Use tz.UTC for the timezone. Loading a csv file in PandasThe capital_onebike.csv file covers the October, November and December rides of the Capital Bikeshare bike W20529.
###Code
rides = pd.read_csv('datasets/capital-onebike.csv', parse_dates = ['Start date', 'End date'])
display(rides.head())
###Output
_____no_output_____
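###Markdown
 Going back to the fold cleanup described above, here is a minimal sketch. It assumes onebike_datetimes still holds the timezone-aware trips built earlier and that tz has been imported from dateutil, as in the previous cells.
###Code
# Sketch of the fold cleanup: mark an end time that appears to precede its
# start as the second occurrence of the repeated hour, then compare in UTC.
trip_durations = []
for trip in onebike_datetimes:
    # When the start is later than the end, the end fell in the repeated hour
    if trip['start'] > trip['end']:
        trip['end'] = tz.enfold(trip['end'])
    # Convert to UTC, where the ambiguity is resolved, before subtracting
    start = trip['start'].astimezone(tz.UTC)
    end = trip['end'].astimezone(tz.UTC)
    trip_durations.append((end - start).total_seconds())

print("Shortest trip: " + str(min(trip_durations)) + " seconds")
###Output
_____no_output_____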
###Markdown
Making timedelta columnsEarlier in this course, you wrote a loop to subtract datetime objects and determine how long our sample bike had been out of the docks. Now you'll do the same thing with Pandas.
###Code
# Subtract the start date from the end date
ride_durations = rides['End date'] - rides['Start date']
# Convert the results to seconds
rides['Duration'] = ride_durations.dt.total_seconds()
print(rides['Duration'].head())
###Output
0 181.0
1 7622.0
2 343.0
3 1278.0
4 1277.0
Name: Duration, dtype: float64
###Markdown
Summarizing Datetime How many joyrides?Suppose you have a theory that some people take long bike rides before putting their bike back in the same dock. Let's call these rides "joyrides".You only have data on one bike, so while you can't draw any bigger conclusions, it's certainly worth a look.Are there many joyrides? How long were they in our data set? Use the median instead of the mean, because we know there are some very long trips in our data set that might skew the answer, and the median is less sensitive to outliers.
###Code
# Create joyrides
joyrides = (rides['Start station'] == rides['End station'])
# Total number of joyrides
print("{} rides were joyrides".format(joyrides.sum()))
# Median of all rides
print("The median duration overall was {:.2f} seconds"\
.format(rides['Duration'].median()))
# Median of joyrides
print("The median duration for joyrides was {:.2f} seconds"\
.format(rides[joyrides]['Duration'].median()))
###Output
6 rides were joyrides
The median duration overall was 660.00 seconds
The median duration for joyrides was 2642.50 seconds
###Markdown
It's getting cold outside, W20529Washington, D.C. has mild weather overall, but the average high temperature in October (68ºF / 20ºC) is certainly higher than the average high temperature in December (47ºF / 8ºC). People also travel more in December, and they work fewer days so they commute less.How might the weather or the season have affected the length of bike trips?
###Code
# Import matplotlib
import matplotlib.pyplot as plt
# Resample rides to daily, take the size, plot the results
rides.resample('D', on = 'Start date')\
.size()\
.plot(ylim = [0, 15])
# Show the results
plt.show()
###Output
_____no_output_____
###Markdown
Since the daily time series is so noisy for this one bike, change the resampling to be monthly.
###Code
# Import matplotlib
import matplotlib.pyplot as plt
# Resample rides to monthly, take the size, plot the results
rides.resample('M', on = 'Start date')\
.size()\
.plot(ylim = [0, 150])
# Show the results
plt.show()
###Output
_____no_output_____
###Markdown
Nice! As you can see, the pattern is clearer at the monthly level: there were fewer rides in November, and then fewer still in December, possibly because the temperature got colder. Members vs casual riders over timeRiders can either be "Members", meaning they pay yearly for the ability to take a bike at any time, or "Casual", meaning they pay at the kiosk attached to the bike dock.Do members and casual riders drop off at the same rate over October to December, or does one drop off faster than the other?As before, rides has been loaded for you. You're going to use the Pandas method .value_counts(), which returns the number of instances of each value in a Series. In this case, the counts of "Member" or "Casual".
###Code
# Resample rides to be monthly on the basis of Start date
monthly_rides = rides.resample('M', on='Start date')['Member type']
# Take the ratio of the .value_counts() over the total number of rides
print(monthly_rides.value_counts() / monthly_rides.size())
###Output
Start date Member type
2017-10-31 Member 0.768519
Casual 0.231481
2017-11-30 Member 0.825243
Casual 0.174757
2017-12-31 Member 0.860759
Casual 0.139241
Name: Member type, dtype: float64
###Markdown
Nice! Note that by default, .resample() labels Monthly resampling with the last day in the month and not the first. It certainly looks like the fraction of Casual riders went down as the number of rides dropped. With a little more digging, you could figure out if keeping Member rides only would be enough to stabilize the usage numbers throughout the fall. Combining groupby() and resample()A very powerful method in Pandas is .groupby(). Whereas .resample() groups rows by some time or date information, .groupby() groups rows based on the values in one or more columns. For example, rides.groupby('Member type').size() would tell us how many rides there were by member type in our entire DataFrame..resample() can be called after .groupby(). For example, how long was the median ride by month, and by Membership type?
###Code
# Group rides by member type, and resample to the month
grouped = rides.groupby('Member type')\
.resample('M', on='Start date')
# Print the median duration for each group
print(grouped['Duration'].median())
###Output
Member type Start date
Casual 2017-10-31 1636.0
2017-11-30 1159.5
2017-12-31 850.0
Member 2017-10-31 671.0
2017-11-30 655.0
2017-12-31 387.5
Name: Duration, dtype: float64
###Markdown
Nice! It looks like casual riders consistently took longer rides, but that both groups took shorter rides as the months went by. Note that, by combining grouping and resampling, you can answer a lot of questions about nearly any data set that includes time as a feature. Keep in mind that you can also group by more than one column at once. Timezones in PandasEarlier in this course, you assigned a timezone to each datetime in a list. Now with Pandas you can do that with a single method call.(Note that, just as before, your data set actually includes some ambiguous datetimes on account of daylight saving; for now, we'll tell Pandas to not even try on those ones. Figuring them out would require more work.)
###Code
# Localize the Start date column to America/New_York
rides['Start date'] = rides['Start date'].dt.tz_localize('America/New_York', ambiguous='NaT')
# Print first value
print(rides['Start date'].iloc[0])
###Output
2017-10-01 15:23:25-04:00
###Markdown
Now switch the Start date column to the timezone 'Europe/London' using the .dt.tz_convert() method.
###Code
# Print first value
print(rides['Start date'].iloc[0])
# Convert the Start date column to Europe/London
rides['Start date'] = rides['Start date'].dt.tz_convert('Europe/London')
# Print the new value
print(rides['Start date'].iloc[0])
###Output
2017-10-01 15:23:25-04:00
2017-10-01 20:23:25+01:00
###Markdown
How long per weekday?Pandas has a number of datetime-related attributes within the .dt accessor. Many of them are ones you've encountered before, like .dt.month. Others are convenient and save time compared to standard Python, like .dt.weekday_name.
###Code
# Add a column for the weekday of the start of the ride
rides['Ride start weekday'] = rides['Start date'].dt.day_name()
# Print the median trip time per weekday
print(rides.groupby('Ride start weekday')['Duration'].median())
###Output
_____no_output_____
###Markdown
How long between rides?For your final exercise, let's take advantage of Pandas indexing to do something interesting. How much time elapsed between rides?
###Code
rides = pd.read_csv('datasets/capital-onebike.csv', parse_dates = ['Start date', 'End date'])
display(rides.head())
# Localize the Start date column to America/New_York
rides['Start date'] = rides['Start date'].dt.tz_localize('America/New_York', ambiguous='NaT')
rides['End date'] = rides['End date'].dt.tz_localize('America/New_York', ambiguous='NaT')
# Shift the index of the end date up one; now subract it from the start date
rides['Time since'] = rides['Start date'] - (rides['End date'].shift(1))
# Move from a timedelta to a number of seconds, which is easier to work with
rides['Time since'] = rides['Time since'].dt.total_seconds()
# Resample to the month
monthly = rides.resample('M', on='Start date')
# Print the average hours between rides each month
print(monthly['Time since'].mean()/(60*60))
###Output
_____no_output_____
|
notebooks/AngularVelocity3D.ipynb
|
###Markdown
Angular velocity in 3D movements> Renato Naville Watanabe, Marcos Duarte > [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab) > Federal University of ABC, Brazil Contents1 Axis of rotation2 Computing the angular velocity2.1 1 ) 3D pendulum bar 3 Further reading4 Problems5 References An usual problem found in Biomechanics (and Mechanics in general) is to find the angular velocity of an object. We consider that a basis $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$ is attached to the body and is known. To learn how to find a basis of a frame of reference, see [this notebook](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Axis of rotationAs in the planar movement, the angular velocity is a vector perpendicular to the rotation. The line in the direction of the angular velocity vector is known as the axis of rotation. The rotation beween two frames of reference is characterized by the rotation matrix $R$ obtained by stacking the versors $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$ in each column of the matrix (for a revision on rotation matrices see [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb) and [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation3D.ipynb) notebooks). A vector in the direction of the axis of rotation is a vector that does not changes the position after the rotation. That is: \begin{equation}v = Rv\end{equation}This vector is the eigenvector of the rotation matrix $R$ with eigenvalue equal to one. Below the yellow arrow indicates the axis of rotation of the rotation between the position of the reference frame $\hat{\boldsymbol i}$, $\hat{\boldsymbol j}$ and $\hat{\boldsymbol k}$ and the reference frame of $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$.
###Code
from IPython.core.display import Math, display
import sympy as sym
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
import numpy as np
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0],
[0, sym.cos(a), -sym.sin(a)],
[0, sym.sin(a), sym.cos(a)]])
RY = sym.Matrix([[sym.cos(b), 0, sym.sin(b)],
[0, 1, 0],
[-sym.sin(b), 0, sym.cos(b)]])
RZ = sym.Matrix([[sym.cos(g), -sym.sin(g), 0],
[sym.sin(g), sym.cos(g), 0],
[0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
R = RY@RX@RZ
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/4
beta = np.pi/4
gamma = np.pi/4
R = R(alpha, beta, gamma)
e1 = np.array([[1, 0, 0]])
e2 = np.array([[0, 1, 0]])
e3 = np.array([[0, 0, 1]])
basis = np.vstack((e1, e2, e3))
basisRot = R@basis
lv, v = np.linalg.eig(R)
axisOfRotation = [np.real(np.squeeze(v[:, np.abs(lv-1) < 1e-6]))]
CCSbasis(Oijk=np.array([0, 0, 0]), Oxyz=np.array([0, 0, 0]), ijk=basis.T,
xyz=basisRot.T, vector=True, point=axisOfRotation);
###Output
_____no_output_____
###Markdown
Computing the angular velocityThe angular velocity $\vec{\boldsymbol\omega}$ is in the direction of the axis of rotation (hence it is parallel to the axis of rotation) and can be described in the basis fixed in the body:\begin{equation} \vec{\boldsymbol{\omega}} = \omega_1\hat{\boldsymbol{e_1}} + \omega_2\hat{\boldsymbol{e_2}} + \omega_3\hat{\boldsymbol{e_3}} \end{equation}So, we must find $\omega_1$, $\omega_2$ and $\omega_3$. First we will express the angular velocity $\vec{\boldsymbol{\omega}}$ in terms of these derivatives. Remember that the angular velocity is described as a vector in the orthogonal plane of the rotation. ($\vec{\boldsymbol{\omega_1}} = \frac{d\theta_1}{dt}\hat{\boldsymbol{e_1}}$, $\vec{\boldsymbol{\omega_2}} = \frac{d\theta_2}{dt}\hat{\boldsymbol{e_2}}$ and $\vec{\boldsymbol{\omega_3}} = \frac{d\theta_3}{dt}\hat{\boldsymbol{e_3}}$). Note also that the derivative of the angle $\theta_1$ can be described as the projection of the vector $\frac{d\hat{\boldsymbol{e_2}}}{dt}$ on the vector $\hat{\boldsymbol{e_3}}$. This can be written by using the scalar product between these vectors: $\frac{d\theta_1}{dt} = \frac{d\hat{\boldsymbol{e_2}}}{dt}\cdot \hat{\boldsymbol{e_3}}$. Similarly, the same is valid for the angles in the other two directions: $\frac{d\theta_2}{dt} = \frac{d\hat{\boldsymbol{e_3}}}{dt}\cdot \hat{\boldsymbol{e_1}}$ and $\frac{d\theta_3}{dt} = \frac{d\hat{\boldsymbol{e_1}}}{dt}\cdot \hat{\boldsymbol{e_2}}$. So, we can write the angular velocity as: \begin{equation} \vec{\boldsymbol{\omega}} = \left(\frac{d\hat{\boldsymbol{e_2}}}{dt}\cdot \hat{\boldsymbol{e_3}}\right) \hat{\boldsymbol{e_1}} + \left(\frac{d\hat{\boldsymbol{e_3}}}{dt}\cdot \hat{\boldsymbol{e_1}}\right) \hat{\boldsymbol{e_2}} + \left(\frac{d\hat{\boldsymbol{e_1}}}{dt}\cdot \hat{\boldsymbol{e_2}}\right) \hat{\boldsymbol{e_3}}\end{equation} Note that the angular velocity $\vec{\boldsymbol\omega}$ is expressed in the reference frame of the object. If you want it described as a linear combination of the versors of the global basis $\hat{\boldsymbol{i}}$, $\hat{\boldsymbol{j}}$ and $\hat{\boldsymbol{k}}$, just multiply the vector $\vec{\boldsymbol\omega}$ by the rotation matrix formed by stacking each versor in a column of the rotation matrix (for a revision on rotation matrices see [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb) and [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation3D.ipynb) notebooks). 3D pendulum barAt the file '../data/3Dpendulum.txt' there are 3 seconds of data of 3 points of a three-dimensional cylindrical pendulum. It can move in every direction and has a motor at the upper part of the cylindrical bar producing torques to move the bar. The point m1 is at the upper part of the cylinder and is the origin of the system. The point m2 is at the center of mass of the cylinder. The point m3 is a point at the surface of the cylinder. Below we compute its angular velocity.First we load the file.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True)
data = np.loadtxt('../data/3dPendulum.txt', skiprows=1, delimiter = ',')
###Output
_____no_output_____
###Markdown
And separate each mark in a variable.
###Code
t = data[:, 0]
m1 = data[:, 1:4]
m2 = data[:, 4:7]
m3 = data[:, 7:]
dt = t[1]
###Output
_____no_output_____
###Markdown
Now, we form the basis $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$.
###Code
V1 = m2 - m1
e1 = V1 / np.linalg.norm(V1, axis=1, keepdims=True)
V2 = m3-m2
V3 = np.cross(V2, V1)
e2 = V3 / np.linalg.norm(V3, axis=1, keepdims=True)
e3 = np.cross(e1, e2)
###Output
_____no_output_____
###Markdown
Below, we compute the derivative of each of the versors.
###Code
de1dt = np.diff(e1, axis=0) / dt
de2dt = np.diff(e2, axis=0) / dt
de3dt = np.diff(e3, axis=0) / dt
###Output
_____no_output_____
###Markdown
Here we compute each of the components $\omega_1$, $\omega_2$ and $\omega_3$ of the angular velocity $\vec{\boldsymbol \omega}$ by using the scalar product.
###Code
omega1 = np.sum(de2dt*e3[0:-1, :], axis=1).reshape(-1, 1)
omega2 = np.sum(de3dt*e1[0:-1, :], axis=1).reshape(-1, 1)
omega3 = np.sum(de1dt*e2[0:-1, :], axis=1).reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Finally, the angular velocity vector $\vec{\boldsymbol \omega}$ is formed by stacking the three components together.
###Code
omega = np.hstack((omega1, omega2, omega3))
%matplotlib inline
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(t[:-1], omega)
ax.set_xlabel('Time [s]')
ax.set_ylabel('Angular velocity [rad/s]')
ax.legend(labels=['$ω_1$', '$ω_2$', '$ω_3$'])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Angular velocity in 3D movements> Renato Naville Watanabe > Laboratory of Biomechanics and Motor Control ([http://pesquisa.ufabc.edu.br/bmclab](http://pesquisa.ufabc.edu.br/bmclab)) > Federal University of ABC, Brazil Contents1 Axis of rotation2 Computing the angular velocity2.1 1 ) 3D pendulum bar 3 Further reading4 Problems5 References An usual problem found in Biomechanics (and Mechanics in general) is to find the angular velocity of an object. We consider that a basis $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$ is attached to the body and is known. To learn how to find a basis of a frame of reference, see [this notebook](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Axis of rotationAs in the planar movement, the angular velocity is a vector perpendicular to the rotation. The line in the direction of the angular velocity vector is known as the axis of rotation. The rotation beween two frames of reference is characterized by the rotation matrix $R$ obtained by stacking the versors $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$ in each column of the matrix (for a revision on rotation matrices see [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb) and [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation3D.ipynb) notebooks). A vector in the direction of the axis of rotation is a vector that does not changes the position after the rotation. That is: \begin{equation}v = Rv\end{equation}This vector is the eigenvector of the rotation matrix $R$ with eigenvalue equal to one. Below the yellow arrow indicates the axis of rotation of the rotation between the position of the reference frame $\hat{\boldsymbol i}$, $\hat{\boldsymbol j}$ and $\hat{\boldsymbol k}$ and the reference frame of $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$.
###Code
from IPython.core.display import Math, display
import sympy as sym
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
import numpy as np
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, sym.cos(a), -sym.sin(a)], [0, sym.sin(a), sym.cos(a)]])
RY = sym.Matrix([[sym.cos(b), 0, sym.sin(b)], [0, 1, 0], [-sym.sin(b), 0, sym.cos(b)]])
RZ = sym.Matrix([[sym.cos(g), -sym.sin(g), 0], [sym.sin(g), sym.cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
R = RY@RX@RZ
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/4
beta = np.pi/4
gamma = np.pi/4
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
lv, v = np.linalg.eig(R)
axisOfRotation = [np.real(np.squeeze(v[:,np.abs(lv-1)<1e-6]))]
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basis.T, xyz=basisRot.T,
vector=True, point = axisOfRotation)
###Output
_____no_output_____
###Markdown
Computing the angular velocityThe angular velocity $\vec{\boldsymbol\omega}$ is in the direction of the axis of rotation (hence it is parallel to the axis of rotation) and can be described in the basis fixed in the body:\begin{equation} \vec{\boldsymbol{\omega}} = \omega_1\hat{\boldsymbol{e_1}} + \omega_2\hat{\boldsymbol{e_2}} + \omega_3\hat{\boldsymbol{e_3}} \end{equation}So, we must find $\omega_1$, $\omega_2$ and $\omega_3$. First we will express the angular velocity $\vec{\boldsymbol{\omega}}$ in terms of these derivatives. Remember that the angular velocity is described as a vector in the orthogonal plane of the rotation. ($\vec{\boldsymbol{\omega_1}} = \frac{d\theta_1}{dt}\hat{\boldsymbol{e_1}}$, $\vec{\boldsymbol{\omega_2}} = \frac{d\theta_2}{dt}\hat{\boldsymbol{e_2}}$ and $\vec{\boldsymbol{\omega_3}} = \frac{d\theta_3}{dt}\hat{\boldsymbol{e_3}}$). Note also that the derivative of the angle $\theta_1$ can be described as the projection of the vector $\frac{d\hat{\boldsymbol{e_2}}}{dt}$ on the vector $\hat{\boldsymbol{e_3}}$. This can be written by using the scalar product between these vectors: $\frac{d\theta_1}{dt} = \frac{d\hat{\boldsymbol{e_2}}}{dt}\cdot \hat{\boldsymbol{e_3}}$. Similarly, the same is valid for the angles in the other two directions: $\frac{d\theta_2}{dt} = \frac{d\hat{\boldsymbol{e_3}}}{dt}\cdot \hat{\boldsymbol{e_1}}$ and $\frac{d\theta_3}{dt} = \frac{d\hat{\boldsymbol{e_1}}}{dt}\cdot \hat{\boldsymbol{e_2}}$. So, we can write the angular velocity as: \begin{equation} \vec{\boldsymbol{\omega}} = \left(\frac{d\hat{\boldsymbol{e_2}}}{dt}\cdot \hat{\boldsymbol{e_3}}\right) \hat{\boldsymbol{e_1}} + \left(\frac{d\hat{\boldsymbol{e_3}}}{dt}\cdot \hat{\boldsymbol{e_1}}\right) \hat{\boldsymbol{e_2}} + \left(\frac{d\hat{\boldsymbol{e_1}}}{dt}\cdot \hat{\boldsymbol{e_2}}\right) \hat{\boldsymbol{e_3}}\end{equation} Note that the angular velocity $\vec{\boldsymbol\omega}$ is expressed in the reference frame of the object. If you want it described as a linear combination of the versors of the global basis $\hat{\boldsymbol{i}}$, $\hat{\boldsymbol{j}}$ and $\hat{\boldsymbol{k}}$, just multiply the vector $\vec{\boldsymbol\omega}$ by the rotation matrix formed by stacking each versor in a column of the rotation matrix (for a revision on rotation matrices see [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb) and [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation3D.ipynb) notebooks). 1 ) 3D pendulum barAt the file '../data/3Dpendulum.txt' there are 3 seconds of data of 3 points of a three-dimensional cylindrical pendulum. It can move in every direction and has a motor at the upper part of the cylindrical bar producing torques to move the bar. The point m1 is at the upper part of the cylinder and is the origin of the system. The point m2 is at the center of mass of the cylinder. The point m3 is a point at the surface of the cylinder. Below we compute its angular velocity.First we load the file.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True)
%matplotlib notebook
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
data = np.loadtxt('../data/3dPendulum.txt', skiprows=1, delimiter = ',')
###Output
_____no_output_____
###Markdown
And separate each mark in a variable.
###Code
t = data[:,0]
m1 = data[:,1:4]
m2 = data[:,4:7]
m3 = data[:,7:]
dt = t[1]
###Output
_____no_output_____
###Markdown
Now, we form the basis $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$.
###Code
V1 = m2 - m1
e1 = V1/np.linalg.norm(V1,axis=1,keepdims=True)
V2 = m3-m2
V3 = np.cross(V2,V1)
e2 = V3/np.linalg.norm(V3,axis=1,keepdims=True)
e3 = np.cross(e1,e2)
###Output
_____no_output_____
###Markdown
Below, we compute the derivative of each of the versors.
###Code
de1dt = np.diff(e1, axis=0)/dt
de2dt = np.diff(e2, axis=0)/dt
de3dt = np.diff(e3, axis=0)/dt
###Output
_____no_output_____
###Markdown
Here we compute each of the components $\omega_1$, $\omega_2$ and $\omega_3$ of the angular velocity $\vec{\boldsymbol \omega}$ by using the scalar product.
###Code
omega1 = np.sum(de2dt*e3[0:-1,:], axis = 1).reshape(-1,1)
omega2 = np.sum(de3dt*e1[0:-1,:], axis = 1).reshape(-1,1)
omega3 = np.sum(de1dt*e2[0:-1,:], axis = 1).reshape(-1,1)
###Output
_____no_output_____
###Markdown
Finally, the angular velocity vector $\vec{\boldsymbol \omega}$ is formed by stacking the three components together.
###Code
omega = np.hstack((omega1, omega2, omega3))
###Output
_____no_output_____
###Markdown
Angular velocity in 3D movements> Renato Naville Watanabe > Laboratory of Biomechanics and Motor Control ([http://pesquisa.ufabc.edu.br/bmclab](http://pesquisa.ufabc.edu.br/bmclab)) > Federal University of ABC, Brazil An usual problem found in Biomechanics (and Mechanics in general) is to find the angular velocity of an object. We consider that a basis $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$ is attached to the body and is known. To learn how to find a basis of a frame of reference, see [this notebook](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Axis of rotationAs in the planar movement, the angular velocity is a vector perpendicular to the rotation. The line in the direction of the angular velocity vector is known as the axis of rotation. The rotation beween two frames of reference is characterized by the rotation matrix $R$ obtained by stacking the versors $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$ in each column of the matrix (for a revision on rotation matrices see [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb) and [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation3D.ipynb) notebooks). A vector in the direction of the axis of rotation is a vector that does not changes the position after the rotation. That is: \begin{equation}v = Rv\end{equation}This vector is the eigenvector of the rotation matrix $R$ with eigenvalue equal to one. Below the yellow arrow indicates the axis of rotation of the rotation between the position of the reference frame $\hat{\boldsymbol i}$, $\hat{\boldsymbol j}$ and $\hat{\boldsymbol k}$ and the reference frame of $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$.
###Code
from IPython.core.display import Math, display
import sympy as sym
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
import numpy as np
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, sym.cos(a), -sym.sin(a)], [0, sym.sin(a), sym.cos(a)]])
RY = sym.Matrix([[sym.cos(b), 0, sym.sin(b)], [0, 1, 0], [-sym.sin(b), 0, sym.cos(b)]])
RZ = sym.Matrix([[sym.cos(g), -sym.sin(g), 0], [sym.sin(g), sym.cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
R = RY@RX@RZ
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/4
beta = np.pi/4
gamma = np.pi/4
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
lv, v = np.linalg.eig(R)
axisOfRotation = [np.real(np.squeeze(v[:,np.abs(lv-1)<1e-6]))]
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basis.T, xyz=basisRot.T,
vector=True, point = axisOfRotation)
###Output
_____no_output_____
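###Markdown
As a quick sanity check, we can verify numerically that the vector found above is left unchanged by the rotation, i.e. that $R\vec{\boldsymbol v} = \vec{\boldsymbol v}$ within numerical precision.
###Code
# the axis of rotation must satisfy v = R @ v (eigenvector with eigenvalue 1)
v_axis = axisOfRotation[0]
print(np.allclose(R @ v_axis, v_axis))
###Output
_____no_output_____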
###Markdown
Computing the angular velocity
The angular velocity $\vec{\boldsymbol\omega}$ is in the direction of the axis of rotation (hence it is parallel to the axis of rotation) and can be described in the basis fixed in the body:\begin{equation} \vec{\boldsymbol{\omega}} = \omega_1\hat{\boldsymbol{e_1}} + \omega_2\hat{\boldsymbol{e_2}} + \omega_3\hat{\boldsymbol{e_3}} \end{equation}So, we must find $\omega_1$, $\omega_2$ and $\omega_3$. First we will express the angular velocity $\vec{\boldsymbol{\omega}}$ in terms of the derivatives of the versors. Remember that each elementary angular velocity is described as a vector orthogonal to the plane of its rotation ($\vec{\boldsymbol{\omega_1}} = \frac{d\theta_1}{dt}\hat{\boldsymbol{e_1}}$, $\vec{\boldsymbol{\omega_2}} = \frac{d\theta_2}{dt}\hat{\boldsymbol{e_2}}$ and $\vec{\boldsymbol{\omega_3}} = \frac{d\theta_3}{dt}\hat{\boldsymbol{e_3}}$). Note also that the derivative of the angle $\theta_1$ can be described as the projection of the vector $\frac{d\hat{\boldsymbol{e_2}}}{dt}$ on the versor $\hat{\boldsymbol{e_3}}$. This can be written by using the scalar product between these vectors: $\frac{d\theta_1}{dt} = \frac{d\hat{\boldsymbol{e_2}}}{dt}\cdot \hat{\boldsymbol{e_3}}$. Similarly, the same is valid for the angles in the other two directions: $\frac{d\theta_2}{dt} = \frac{d\hat{\boldsymbol{e_3}}}{dt}\cdot \hat{\boldsymbol{e_1}}$ and $\frac{d\theta_3}{dt} = \frac{d\hat{\boldsymbol{e_1}}}{dt}\cdot \hat{\boldsymbol{e_2}}$. So, we can write the angular velocity as: \begin{equation} \vec{\boldsymbol{\omega}} = \left(\frac{d\hat{\boldsymbol{e_2}}}{dt}\cdot \hat{\boldsymbol{e_3}}\right) \hat{\boldsymbol{e_1}} + \left(\frac{d\hat{\boldsymbol{e_3}}}{dt}\cdot \hat{\boldsymbol{e_1}}\right) \hat{\boldsymbol{e_2}} + \left(\frac{d\hat{\boldsymbol{e_1}}}{dt}\cdot \hat{\boldsymbol{e_2}}\right) \hat{\boldsymbol{e_3}}\end{equation} Note that the angular velocity $\vec{\boldsymbol\omega}$ is expressed in the reference frame of the object. If you want it described as a linear combination of the versors of the global basis $\hat{\boldsymbol{i}}$, $\hat{\boldsymbol{j}}$ and $\hat{\boldsymbol{k}}$, just multiply the vector $\vec{\boldsymbol\omega}$ by the rotation matrix formed by stacking each versor in a column of the rotation matrix (for a review of rotation matrices see [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb) and [this](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation3D.ipynb) notebook); a short sketch of this conversion is given at the end of this notebook.
1) 3D pendulum bar
The file '../data/3dPendulum.txt' contains 3 seconds of data for 3 points of a three-dimensional cylindrical pendulum. The pendulum can move in every direction and has a motor at the upper part of the cylindrical bar producing torques to move the bar. The point m1 is at the upper part of the cylinder and is the origin of the system. The point m2 is at the center of mass of the cylinder. The point m3 is a point on the surface of the cylinder. Below we compute its angular velocity. First we load the file.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True)
%matplotlib notebook
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
data = np.loadtxt('../data/3dPendulum.txt', skiprows=1, delimiter = ',')
###Output
_____no_output_____
###Markdown
And separate each mark in a variable.
###Code
t = data[:,0]
m1 = data[:,1:4]
m2 = data[:,4:7]
m3 = data[:,7:]
dt = t[1] - t[0]  # sampling interval (assumes uniform sampling)
###Output
_____no_output_____
###Markdown
Now, we form the basis $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$.
###Code
V1 = m2 - m1
e1 = V1/np.linalg.norm(V1,axis=1,keepdims=True)
V2 = m3-m2
V3 = np.cross(V2,V1)
e2 = V3/np.linalg.norm(V3,axis=1,keepdims=True)
e3 = np.cross(e1,e2)
###Output
_____no_output_____
###Markdown
Below, we compute the derivative of each of the versors.
###Code
de1dt = np.diff(e1, axis=0)/dt
de2dt = np.diff(e2, axis=0)/dt
de3dt = np.diff(e3, axis=0)/dt
###Output
_____no_output_____
###Markdown
Here we compute each of the components $\omega_1$, $\omega_2$ and $\omega_3$ of the angular velocity $\vec{\boldsymbol \omega}$ by using the scalar product.
###Code
omega1 = np.sum(de2dt*e3[0:-1,:], axis = 1).reshape(-1,1)
omega2 = np.sum(de3dt*e1[0:-1,:], axis = 1).reshape(-1,1)
omega3 = np.sum(de1dt*e2[0:-1,:], axis = 1).reshape(-1,1)
###Output
_____no_output_____
###Markdown
Finally, the angular velocity vector $\vec{\boldsymbol \omega}$ is formed by stacking the three components together.
###Code
omega = np.hstack((omega1, omega2, omega3))
###Output
_____no_output_____
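###Markdown
The angular velocity above is expressed in the basis attached to the body. Following the recipe described earlier, a minimal sketch of how it can be expressed in the global basis $\hat{\boldsymbol{i}}$, $\hat{\boldsymbol{j}}$, $\hat{\boldsymbol{k}}$ is shown below: at each instant the versors $\hat{\boldsymbol e_1}$, $\hat{\boldsymbol e_2}$ and $\hat{\boldsymbol e_3}$ are stacked in the columns of the rotation matrix, which then multiplies $\vec{\boldsymbol \omega}$.
###Code
# Express the angular velocity in the global basis: omega_global = R @ omega_body,
# where the columns of R are the versors e1, e2 and e3 at that instant
omega_global = np.empty_like(omega)
for i in range(omega.shape[0]):
    R_i = np.column_stack((e1[i, :], e2[i, :], e3[i, :]))
    omega_global[i, :] = R_i @ omega[i, :]
omega_global[:5, :]
###Output
_____no_output_____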
|
datathon-nyc/parquet_pandas_stonewall_cc_vm.ipynb
|
###Markdown
Working with Archives Unleashed Parquet Derivatives
In this notebook, we'll set up an environment, then download a dataset of web archive collection derivatives that were produced with the [Archives Unleashed Toolkit](https://github.com/archivesunleashed/aut/). These derivatives are in the [Apache Parquet](https://parquet.apache.org/) format, which is a [columnar storage](http://en.wikipedia.org/wiki/Column-oriented_DBMS) format. These derivatives are generally small enough to work with on your local machine, and can be easily converted to Pandas DataFrames as demonstrated below. This notebook is useful for exploring the following derivatives.
**[Binary Analysis](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#binary-analysis)**
- [Audio](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#extract-audio-information)
- [Images](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#extract-image-information)
- [PDFs](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#extract-pdf-information)
- [Presentation program files](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#extract-presentation-program-files-information)
- [Spreadsheets](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#extract-spreadsheet-information)
- [Text files](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#extract-text-files-information)
- [Word processor files](https://github.com/archivesunleashed/aut-docs/blob/master/current/binary-analysis.md#extract-word-processor-files-information)
**Web Pages**
`.webpages().select($"crawl_date", $"url", $"mime_type_web_server", $"mime_type_tika", RemoveHTMLDF(RemoveHTTPHeaderDF(($"content"))).alias("content"))`
Produces a DataFrame with the following columns:
- `crawl_date`
- `url`
- `mime_type_web_server`
- `mime_type_tika`
- `content`
As the `webpages` derivative is especially rich - it contains the full text of all webpages - we have a separate notebook for [text analysis](https://github.com/archivesunleashed/notebooks/blob/master/parquet_text_analyis.ipynb).
**Web Graph**
`.webgraph()`
Produces a DataFrame with the following columns:
- `crawl_date`
- `src`
- `dest`
- `anchor`
**Image Links**
`.imageLinks()`
Produces a DataFrame with the following columns:
- `src`
- `image_url`
**Domains**
`.webpages().groupBy(ExtractDomainDF($"url").alias("url")).count().sort($"count".desc)`
Produces a DataFrame with the following columns:
- `domain`
- `count`
We recommend running through the notebook with the provided sample dataset. You may then want to substitute it with your own dataset.
Dataset
Web archive derivatives of the [Stonewall 50 Commemoration collection](https://archive-it.org/collections/12143) from [Columbia University Libraries](https://archive-it.org/home/Columbia).
The derivatives were created with the [Archives Unleashed Toolkit](https://github.com/archivesunleashed/aut/) and [Archives Unleashed Cloud](https://cloud.archivesunleashed.org/). The dataset is archived at [doi:10.5281/zenodo.3631347](https://doi.org/10.5281/zenodo.3631347). Curious about the size of the derivative Parquet output compared to the size of the web archive collection? The total size of all 11 Parquet derivatives is 2.2G, with `webpages` being the largest (1.5G) since it has a column with the full text (`content`).
```
16K   parquet/presentation-program-files
1.5G  parquet/webpages
16K   parquet/spreadsheet
784K  parquet/pdf
24K   parquet/word-processor
2.4M  parquet/text-files
105M  parquet/image
180M  parquet/imagelinks
1.7M  parquet/audio
433M  parquet/webgraph
308K  parquet/domains
2.2G  parquet/
```
The total size of the web archive collection is 128G. The following command downloads all of the Parquet files from the Zenodo data repository. To run a 'cell,' you can click the play button next to the cell or you can press your shift key and enter key at the same time. Whenever you see code snippets like this, you should do the same thing to run it. The command after that provides a list of all the downloaded Parquet files. You should see a list of all the different derivatives here - note that they line up with the list provided at the beginning of this notebook.
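###Markdown
Below is a minimal sketch of one way to fetch the files with the Zenodo REST API, assuming record ID `3631347` (taken from the DOI above) and that each entry of the record's `files` listing exposes a file name (`key`) and a download URL (`links['self']`); the original download command may differ, and if the record ships a single compressed archive it will still need to be unpacked into `cul-12143-parquet/` afterwards.
###Code
# Sketch: fetch the derivative files from the Zenodo record (ID assumed from the DOI above)
import os
import requests

record_id = "3631347"
out_dir = "cul-12143-parquet"
os.makedirs(out_dir, exist_ok=True)

record = requests.get(f"https://zenodo.org/api/records/{record_id}").json()
for f in record.get("files", []):
    name = f["key"]            # file name as stored on Zenodo (assumed field)
    url = f["links"]["self"]   # direct download link (assumed field)
    target = os.path.join(out_dir, name)
    if not os.path.exists(target):
        with requests.get(url, stream=True) as r:
            r.raise_for_status()
            with open(target, "wb") as fh:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    fh.write(chunk)
###Output
_____no_output_____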
###Code
!ls -1 cul-12143-parquet
###Output
audio
domains
image
imagelinks
pdf
presentation-program-files
spreadsheet
text-files
webgraph
webpages
word-processor
###Markdown
Environment
Next, we'll set up our environment so we can work with the Parquet output with [Pandas](https://pandas.pydata.org).
###Code
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Loading our Archives Unleashed Datasets as DataFrames
Next, we'll load up our datasets to work with and show a preview of each. We'll load the network, domains, web graph, and images. The remainder of the binary datasets (audio, video, spreadsheets, etc.) all follow the same pattern as the images dataset, except that they do not have the height and width columns. A useful exercise when trying to learn how to use this would be to swap out images for audio, for example, and see how you can explore these other file types. We've provided a [separate notebook](https://github.com/archivesunleashed/notebooks/blob/master/parquet_text_analyis.ipynb) to work with the pages dataset because it tends to be resource intensive.
Images
The following commands create a variable called `images` that contains a DataFrame with all of the image information from the web archive. **Reminder:** If you want to look at a different derivative, you can, for instance, swap out `images` for `audio`.
###Code
images_parquet = pq.read_table('cul-12143-parquet/image')
images = images_parquet.to_pandas()
images
###Output
/home/ubuntu/anaconda3/lib/python3.7/site-packages/pyarrow/pandas_compat.py:752: FutureWarning: .labels was deprecated in version 0.24.0. Use .codes instead.
labels, = index.labels
###Markdown
Web Graph
The next dataset we will explore is the "web graph." This is a DataFrame containing all the hyperlinks within a collection - from `src` (the page that _contains_ the link) to `dest` (the page that the link is linking _to_). It also includes the date when this link was crawled, as well as the `anchor` text (what the user clicks on to visit the destination).
###Code
webgraph_parquet = pq.read_table('cul-12143-parquet/webgraph')
webgraph = webgraph_parquet.to_pandas()
webgraph
###Output
_____no_output_____
###Markdown
Domains
This derivative contains basic information about what's been collected in the crawl. Specifically, we can analyze how often pages from each domain appear.
###Code
domains_parquet = pq.read_table('cul-12143-parquet/domains')
domains = domains_parquet.to_pandas()
domains
###Output
_____no_output_____
###Markdown
Data Analysis
Now that we have all of our datasets loaded up, we can begin to work with them!
Counting total files and unique files
Count the number of rows (how many images are in the web archive collection).
###Code
images.count()
###Output
_____no_output_____
###Markdown
How many unique images are in the collection? We can see if an image is unique or not by computing an [MD5 hash](https://en.wikipedia.org/wiki/MD5#MD5_hashes) of it. The exact same image might be called `example.jpg` and `foo.jpg` - by computing the hash, we can see that even with different file names, they are actually the same image!
###Code
len(images.md5.unique())
###Output
_____no_output_____
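###Markdown
As a quick illustration of the point above, we can count how many distinct file names each hash appears under; any value above one is the same image saved under different names.
###Code
# number of distinct filenames per image hash
images.groupby('md5')['filename'].nunique().sort_values(ascending=False).head(10)
###Output
_____no_output_____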
###Markdown
What are the top 10 most occurring images in the collection?
Here we discover which image (or images) occur most frequently.
###Code
images['md5'].value_counts().head(10)
###Output
_____no_output_____
###Markdown
What's the information around all of the occurrences of `b798f4ce7359fd815df4bdf76503b295`?
What, you mean you don't know what `b798f4ce7359fd815df4bdf76503b295` means? Let's find those images in the DataFrame table - here we can see the real file name (`erosion.jpg`) and, more importantly, its URL within the web archive.
###Code
images.loc[images['md5'] == 'b798f4ce7359fd815df4bdf76503b295']
###Output
_____no_output_____
###Markdown
What does `b798f4ce7359fd815df4bdf76503b295` look like?
We can extract the binary from the web archive using our [binary extraction functions](https://github.com/archivesunleashed/aut-docs-new/blob/master/current/image-analysis.md#scala-df).
```scala
import io.archivesunleashed._
import io.archivesunleashed.df._

val df = RecordLoader
  .loadArchives("example.arc.gz", sc)
  .extractImageDetailsDF();

df.select($"bytes", $"extension")
  .saveToDisk("bytes", "/path/to/export/directory/your-preferred-filename-prefix", $"extension")
```
**But**, since we don't have access to the WARC files here, just the Parquet derivatives, we can make do by trying to display a live web version of the image or a replay URL. In this case, BANQ's replay service is available at [https://waext.banq.qc.ca](https://waext.banq.qc.ca).
###Code
pd.options.display.max_colwidth = -1
one_image = images.loc[images['md5'] == 'b798f4ce7359fd815df4bdf76503b295'].head(1)
one_image['url']
###Output
_____no_output_____
###Markdown
Oh. Surprise, surprise. The most popular image is a 1-pixel image that [Facebook uses to track users for conversion](https://developers.facebook.com/docs/facebook-pixel/implementation/conversion-tracking).
What are the top 10 most occurring filenames in the collection?
Note that this is of course different from the MD5 results up above. Here we are focusing _just_ on the filename. So `carte-p.jpg`, for example, might actually refer to different images that happen to have the same name.
###Code
top_filenames = images['filename'].value_counts().head(10)
top_filenames
###Output
_____no_output_____
###Markdown
Let's plot it!
###Code
top_filenames_chart = top_filenames.plot.bar(figsize=(25,10))
top_filenames_chart.set_title("Top Filenames", fontsize=22)
top_filenames_chart.set_xlabel("Filename", fontsize=20)
top_filenames_chart.set_ylabel("Count", fontsize=20)
###Output
_____no_output_____
###Markdown
How about a MIME type distribution?
What _kind_ of image files are present? We can discover this by checking their "media type", or [MIME type](https://en.wikipedia.org/wiki/Media_type).
###Code
image_mime_types = images['mime_type_tika'].value_counts().head(5)
image_mime_type_chart = image_mime_types.plot.bar(figsize=(20,10))
image_mime_type_chart.set_title("Images MIME Type Distribution", fontsize=22)
image_mime_type_chart.set_xlabel("MIME Type", fontsize=20)
image_mime_type_chart.set_ylabel("Count", fontsize=20)
###Output
_____no_output_____
###Markdown
How about the distribution of the top 10 domains?
Here we can see which domains are the most frequent within the web archive.
###Code
top_domains = domains.sort_values('count', ascending=False).head(10)
top_domains_chart = top_domains.plot.bar(x='url', y='count', figsize=(25,13))
top_domains_chart.set_title("Domains Distribution", fontsize=22)
top_domains_chart.set_xlabel("Domain", fontsize=20)
top_domains_chart.set_ylabel("Count", fontsize=20)
###Output
_____no_output_____
###Markdown
Top Level Domain Analysis
Now let's create a new column, `tld`, based on the existing `url` column. This example should give you an idea of how you can expand these datasets to do further research and analysis. A [top-level domain](https://en.wikipedia.org/wiki/Top-level_domain) refers to the highest domain in an address - i.e. `.ca`, `.com`, `.org`, or yes, even `.pizza`. Things get a bit complicated, however, with some national TLDs. While `qc.ca` (the domain for Quebec) isn't really a top-level domain, it has many of the features of one, as people can directly register under it. Below, we'll use the `suffix` attribute to include this.
> You can learn more about suffixes at https://publicsuffix.org.
We'll take the `url` column and extract the `tld` from it with [`tldextract`](https://github.com/john-kurkowski/tldextract). First we'll add the [`tldextract`](https://github.com/john-kurkowski/tldextract) library to the notebook. Then, we'll create the new column.
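###Markdown
Before building the new column, a quick illustrative call shows what `tldextract` returns for a single address: the subdomain, the registered domain and the suffix are reported separately (the exact split depends on the version of the public suffix list bundled with the library).
###Code
import tldextract

# illustrative example: subdomain, domain and suffix come back as separate fields
tldextract.extract('https://www.banq.qc.ca/accueil/')
###Output
_____no_output_____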
###Code
import tldextract
# use the suffix (e.g. 'qc.ca', 'com'), as described above, rather than the registered domain
domains['tld'] = domains.apply(lambda row: tldextract.extract(row.url).suffix, axis=1)
domains
###Output
_____no_output_____
###Markdown
Next, let's count the distinct TLDs.
###Code
tld_count = domains['tld'].value_counts()
tld_count
###Output
_____no_output_____
###Markdown
Next, we'll plot the TLD count.
###Code
tld_chart = tld_count.head(20).plot.bar(legend=None, figsize=(25,10))
tld_chart.set_xlabel("TLD", fontsize=20)
tld_chart.set_ylabel("Count", fontsize=20)
tld_chart.set_title("Top Level Domain Distribution", fontsize=22)
###Output
_____no_output_____
###Markdown
Examining the Web Graph Remember the hyperlink web graph? Let's look at the web graph columns again.
###Code
webgraph
###Output
_____no_output_____
###Markdown
What are the most frequent crawl dates?
###Code
crawl_dates = webgraph['crawl_date'].value_counts()
crawl_dates
# sort by date so the line plot reads chronologically rather than in frequency order
crawl_dates_chart = crawl_dates.sort_index().plot.line(figsize=(25,12))
crawl_dates_chart.set_xlabel("Crawl Date", fontsize=20)
crawl_dates_chart.set_ylabel("Count", fontsize=20)
crawl_dates_chart.set_title("Crawl Date Frequency", fontsize=22)
###Output
_____no_output_____
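###Markdown
To wrap up, here is a quick additional look at the web graph itself using the columns described earlier (`src`, `dest`, `anchor`): the most frequently linked-to destination pages and the most common anchor text.
###Code
# Most frequently linked-to destination pages and most common anchor text
print(webgraph['dest'].value_counts().head(10))
print(webgraph['anchor'].value_counts().head(10))
###Output
_____no_output_____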
|
Notebooks/diabClassifier.ipynb
|
###Markdown
We have very unbalanced data, so we have to keep that in mind for the training process.
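###Markdown
Before the exploratory plots, here is a minimal set of imports covering the library names used in the cells below (pandas, seaborn, matplotlib, scikit-learn, XGBoost and imbalanced-learn); it assumes the diabetes dataset has already been loaded into a DataFrame called `data`.
###Code
# Imports used throughout this notebook (the dataset is assumed to be loaded in `data`)
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn import metrics
from sklearn import preprocessing as prep
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from imblearn.under_sampling import RandomUnderSampler
###Output
_____no_output_____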
###Code
# Age and BMI DISTRIBUTION
sns.FacetGrid(data,hue="diab",size=5).map(sns.distplot,"ageq3")
sns.FacetGrid(data,hue="diab",size=5).map(sns.distplot,"aphyq3").add_legend()
sns.boxplot(data=data,y="ageq3",x="diab")
# Impact of the type of diet when there is a family history of diabetes
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 6))
sns.boxplot(data=data,y="pattern_western",x="diab",hue='ATCDfamdiabQ8',ax=axes[0])
sns.boxplot(data=data,y="pattern_prudent",x="diab",hue='ATCDfamdiabQ8',ax=axes[1])
fig.tight_layout()
sns.FacetGrid(data,hue="diab",size=5).map(sns.distplot,"pattern_prudent").add_legend()
sns.FacetGrid(data,hue="diab",size=5).map(sns.distplot,"pattern_western").add_legend()
# Deal with collinearity:
corr= data.corr()
cmap=sns.diverging_palette(5, 250, as_cmap=True)
def magnify():
return [dict(selector="th",
props=[("font-size", "7pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
corr.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '10pt'})\
.set_caption("Hover to magify")\
.set_precision(2)\
.set_table_styles(magnify())
### let's delete hypolipi2 sommeil kcalsac
data.drop(columns=["hypolipi2","sommeil", "kcalsac"],inplace=True)
# transform étude
sns.displot(data[data.diab==1], x="etude", hue="diab")
data['etude'] = pd.cut(data['etude'],3, labels=[1,2,3])
data['etude'] = data['etude'].astype("float")
# Find NUMERICAL and ORDINAL/CATEGORICAL columns
cat_ord_cols = list(data.columns[data.nunique()<=9])
#cat_ord_cols.remove("etude")
cat_ord_cols.remove("diab")
num_cols = list(data.columns[data.nunique()>9])+["etude"]
##
for col in data.columns:
if col in cat_ord_cols :
data[col].fillna(9,inplace=True)
if col in num_cols :
data[col].fillna(data[col].median(),inplace=True)
# One hot encode
cat_cols = list(data.columns[data.nunique()<=3])
cat_cols.remove("diab")
for col in cat_cols:
one_hot = pd.get_dummies(data[col],prefix=col)
# Drop the original column as it is now encoded
data = data.drop(col,axis = 1)
# Join the encoded df
data = data.join(one_hot)
data.head()
# Train and test
X_app,X_test,y_app,y_test = train_test_split(data.drop(columns=["diab"]), data['diab'],stratify=data['diab'], test_size=0.20,random_state=1)
print(y_app.value_counts())
print(y_test.value_counts())
# Standardize the features
scaler = prep.StandardScaler()
# Fit the scaler on the training data and transform it
X_train = scaler.fit_transform(X_app)
y_train = y_app  # keep the naming used by the cells below
# Apply the scaler fitted on the training data to the test data (no refitting)
X_test = scaler.transform(X_test)
# First baseline model
LR = LogisticRegression()
LR.fit(X_train,y_train)
# Predict
predictions = LR.predict(X_test)
# Use score method to get accuracy of model
score = LR.score(X_test, y_test)
print("Model's accuracy :",score)
cm = metrics.confusion_matrix(y_test, predictions)
print(metrics.classification_report(y_test, predictions))
plt.figure(figsize=(4,4))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
###Output
_____no_output_____
###Markdown
Ensemble methods: Bagging (Random Forest) and Boosting (XGBoost)
###Code
RF = RandomForestClassifier()
RF.fit(X_train,y_train)
# Predict
predictions = RF.predict(X_test)
# Use score method to get accuracy of model
score = RF.score(X_test, y_test)
print("Model's accuracy :",score)
cm = metrics.confusion_matrix(y_test, predictions)
print(metrics.classification_report(y_test, predictions))
XG = xgb.XGBClassifier(use_label_encoder=False, eval_metric='mlogloss')
XG.fit(X_train,y_train)
# Predict
predictions = XG.predict(X_test)
# Use score method to get accuracy of model
score = XG.score(X_test, y_test)
print("Model's accuracy :",score)
cm = metrics.confusion_matrix(y_test, predictions)
print(metrics.classification_report(y_test, predictions))
###Output
Model's accuracy : 0.9475236657977775
precision recall f1-score support
0 0.95 1.00 0.97 13821
1 0.45 0.05 0.09 757
accuracy 0.95 14578
macro avg 0.70 0.52 0.53 14578
weighted avg 0.92 0.95 0.93 14578
###Markdown
Undersampling and oversampling
###Code
# Undersample: create an undersampler object and resample the training set
rus = RandomUnderSampler()
X_train, y_train = rus.fit_resample(X_app, y_app)
y_train.value_counts()
from imblearn.over_sampling import SMOTE
# Oversample with SMOTE; note that this overwrites the undersampled X_train and y_train,
# so the models below are trained on the SMOTE-resampled data
sm = SMOTE()
X_train, y_train = sm.fit_resample(X_app, y_app)
y_train.value_counts()
# Inspect the original (unresampled) training features
X_app
# First baseline model
LR = LogisticRegression()
LR.fit(X_train,y_train)
# Predict
predictions = LR.predict(X_test)
# Use score method to get accuracy of model
score = LR.score(X_test, y_test)
print("Model's accuracy :",score)
cm = metrics.confusion_matrix(y_test, predictions)
print(metrics.classification_report(y_test, predictions))
plt.figure(figsize=(4,4))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
rfc = RandomForestClassifier(n_jobs=-1, max_features='sqrt', n_estimators=50, oob_score=True)
param_grid = {
    'n_estimators': [200, 700],
    'max_features': ['auto', 'sqrt', 'log2']
}
RF = GridSearchCV(estimator=rfc, param_grid=param_grid, cv=5, verbose=100)
RF.fit(X_train,y_train)
# Predict
predictions = RF.predict(X_test)
# Use score method to get accuracy of model
score = RF.score(X_test, y_test)
print("Model's accuracy :",score)
cm = metrics.confusion_matrix(y_test, predictions)
print(metrics.classification_report(y_test, predictions))
cm = metrics.confusion_matrix(y_test, predictions)
print(metrics.classification_report(y_test, predictions))
plt.figure(figsize=(4,4))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
XG = xgb.XGBClassifier(use_label_encoder=False, eval_metric='mlogloss')
XG.fit(X_train,y_train)
# Predict
predictions = XG.predict(X_test)
# Use score method to get accuracy of model
score = XG.score(X_test, y_test)
print("Model's accuracy :",score)
cm = metrics.confusion_matrix(y_test, predictions)
print(metrics.classification_report(y_test, predictions))
plt.figure(figsize=(4,4))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
###Output
Model's accuracy : 0.6869255041843875
precision recall f1-score support
0 0.97 0.69 0.81 13821
1 0.11 0.67 0.18 757
accuracy 0.69 14578
macro avg 0.54 0.68 0.49 14578
weighted avg 0.93 0.69 0.77 14578
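###Markdown
Resampling is not the only way to handle the class imbalance noted at the beginning of this notebook. As a point of comparison, here is a minimal sketch using scikit-learn's built-in `class_weight='balanced'` option, reusing the scaler fitted earlier so that the training and test features are scaled consistently.
###Code
# Class-weighted logistic regression on the scaled, non-resampled training data
LR_w = LogisticRegression(class_weight='balanced')
LR_w.fit(scaler.transform(X_app), y_app)
predictions = LR_w.predict(X_test)
print(metrics.classification_report(y_test, predictions))
###Output
_____no_output_____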
|
notebooks/test_temporal_dimension.ipynb
|
###Markdown
Length of sequence: 2
###Code
batch_size = 70
len_sqce = 2
delta_t = 6
description = "all_const_z1000_len{}_delta{}".format(len_sqce, delta_t)
model_filename = model_save_path + "spherical_unet_" + description + ".h5"
pred_filename = pred_save_path + "spherical_unet_" + description + ".nc"
rmse_filename = datadir + 'metrics/rmse_' + description + '.nc'
# Train and validation data
training_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_train, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=train_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
validation_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_valid, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=val_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
dl_train = DataLoader(training_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers,
pin_memory=pin_memory)
dl_val = DataLoader(validation_ds, batch_size=batch_size*2, shuffle=False, num_workers=num_workers,
pin_memory=pin_memory)
# Model
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
# Train model
train_loss, val_loss = train_model_2steps_temp(spherical_unet, device, dl_train, epochs=nb_epochs,
lr=learning_rate, validation_data=dl_val,
model_filename=model_filename)
torch.save(spherical_unet.state_dict(), model_filename)
# Show training losses
plt.plot(train_loss, label='Training loss')
plt.plot(val_loss, label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('MSE Loss')
plt.legend()
plt.show()
del dl_train, dl_val, training_ds, validation_ds
torch.cuda.empty_cache()
'''
# Load optimal model
del spherical_unet
torch.cuda.empty_cache()
optimal_filename = model_filename#[:-3] + '_epoch' + str(np.argmin(val_loss)) + '.h5'
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
spherical_unet.load_state_dict(torch.load(optimal_filename), strict=False)'''
# Testing data
testing_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_test, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=test_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std,
max_lead_time=max_lead_time)
dataloader_test = DataLoader(testing_ds, batch_size=int(0.7*batch_size), shuffle=False,
num_workers=num_workers)
# Compute predictions
preds = create_iterative_predictions_healpix_temp(spherical_unet, device, dataloader_test)
preds.to_netcdf(pred_filename)
# Compute and save RMSE
rmse = compute_rmse_healpix(preds, obs).load()
rmse.to_netcdf(rmse_filename)
# Show RMSE
print('Z500 - 0:', rmse.z.values[0])
print('T850 - 0:', rmse.t.values[0])
plot_rmses(rmse, rmses_weyn, lead_time=6)
del spherical_unet, preds, rmse
torch.cuda.empty_cache()
len_sqce = 2
delta_t = 12
batch_size = 70
description = "all_const_z1000_len{}_delta{}".format(len_sqce, delta_t)
model_filename = model_save_path + "spherical_unet_" + description + ".h5"
pred_filename = pred_save_path + "spherical_unet_" + description + ".nc"
rmse_filename = datadir + 'metrics/rmse_' + description + '.nc'
# Train and validation data
training_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_train, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=train_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
validation_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_valid, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=val_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
dl_train = DataLoader(training_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers,
pin_memory=pin_memory)
dl_val = DataLoader(validation_ds, batch_size=batch_size*2, shuffle=False, num_workers=num_workers,
pin_memory=pin_memory)
# Model
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
# Train model
train_loss, val_loss = train_model_2steps_temp(spherical_unet, device, dl_train, epochs=nb_epochs,
lr=learning_rate, validation_data=dl_val,
model_filename=model_filename)
torch.save(spherical_unet.state_dict(), model_filename)
# Show training losses
plt.plot(train_loss, label='Training loss')
plt.plot(val_loss, label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('MSE Loss')
plt.legend()
plt.show()
del dl_train, dl_val, training_ds, validation_ds
torch.cuda.empty_cache()
'''# Load optimal model
del spherical_unet
torch.cuda.empty_cache()
optimal_filename = model_filename[:-3] + '_epoch' + str(np.argmin(val_loss)) + '.h5'
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
spherical_unet.load_state_dict(torch.load(optimal_filename), strict=False)'''
# Testing data
testing_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_test, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=test_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std,
max_lead_time=max_lead_time)
dataloader_test = DataLoader(testing_ds, batch_size=int(0.7*batch_size), shuffle=False,
num_workers=num_workers)
# Compute predictions
preds = create_iterative_predictions_healpix_temp(spherical_unet, device, dataloader_test)
preds.to_netcdf(pred_filename)
# Compute and save RMSE
rmse = compute_rmse_healpix(preds, obs).load()
rmse.to_netcdf(rmse_filename)
# Show RMSE
print('Z500 - 0:', rmse.z.values[0])
print('T850 - 0:', rmse.t.values[0])
f, axs = plt.subplots(1, 2, figsize=(17, 5))
lead_times_ = np.arange(delta_t, max_lead_time + delta_t, delta_t)
lead_times = np.arange(6, max_lead_time + 6, 6)
axs[0].plot(lead_times_, rmse.z.values, label='Spherical')
axs[0].plot(lead_times, rmses_weyn.z.values, label='Weyn 2020')
axs[0].legend()
axs[1].plot(lead_times_, rmse.t.values, label='Spherical')
axs[1].plot(lead_times, rmses_weyn.t.values, label='Weyn 2020')
axs[1].legend()
plt.show()
del spherical_unet, preds, rmse
torch.cuda.empty_cache()
###Output
Loading data into RAM
Loading data into RAM
Epoch: 1/ 20 - loss: 0.109 - val_loss: 0.07488 - time: 1742.675523
Epoch: 2/ 20 - loss: 0.061 - val_loss: 0.06903 - time: 1764.417533
Epoch: 3/ 20 - loss: 0.054 - val_loss: 0.06787 - time: 1759.551396
Epoch: 4/ 20 - loss: 0.051 - val_loss: 0.06718 - time: 1762.941111
Epoch: 5/ 20 - loss: 0.049 - val_loss: 0.06087 - time: 1762.259767
Epoch: 6/ 20 - loss: 0.047 - val_loss: 0.06011 - time: 1764.387260
Epoch: 7/ 20 - loss: 0.046 - val_loss: 0.06761 - time: 1765.929639
Epoch: 8/ 20 - loss: 0.045 - val_loss: 0.06158 - time: 1763.395760
Epoch: 9/ 20 - loss: 0.044 - val_loss: 0.05667 - time: 1765.516890
Epoch: 10/ 20 - loss: 0.044 - val_loss: 0.06011 - time: 1762.467128
Epoch: 11/ 20 - loss: 0.043 - val_loss: 0.05784 - time: 1765.762122
Epoch: 12/ 20 - loss: 0.043 - val_loss: 0.05482 - time: 1764.484922
Epoch: 13/ 20 - loss: 0.042 - val_loss: 0.05664 - time: 1763.333671
Epoch: 14/ 20 - loss: 0.042 - val_loss: 0.05506 - time: 1757.768278
Epoch: 15/ 20 - loss: 0.042 - val_loss: 0.05747 - time: 1756.171202
Epoch: 16/ 20 - loss: 0.041 - val_loss: 0.05494 - time: 1741.264820
Epoch: 17/ 20 - loss: 0.041 - val_loss: 0.05732 - time: 1743.718503
Epoch: 18/ 20 - loss: 0.041 - val_loss: 0.05865 - time: 1744.790711
Epoch: 19/ 20 - loss: 0.041 - val_loss: 0.05483 - time: 1738.929633
Epoch: 20/ 20 - loss: 0.041 - val_loss: 0.06133 - time: 1742.032308
###Markdown
Length of sequence: 4
###Code
batch_size = 100
len_sqce = 4
delta_t = 6
description = "all_const_z1000_len{}_delta{}".format(len_sqce, delta_t)
model_filename = model_save_path + "spherical_unet_" + description + ".h5"
pred_filename = pred_save_path + "spherical_unet_" + description + ".nc"
rmse_filename = datadir + 'metrics/rmse_' + description + '.nc'
# Train and validation data
training_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_train, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=train_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
validation_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_valid, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=val_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
dl_train = DataLoader(training_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers,
pin_memory=pin_memory)
dl_val = DataLoader(validation_ds, batch_size=batch_size*2, shuffle=False, num_workers=num_workers,
pin_memory=pin_memory)
# Model
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
# Train model
train_loss, val_loss = train_model_2steps_temp(spherical_unet, device, dl_train, epochs=nb_epochs,
lr=learning_rate, validation_data=dl_val,
model_filename=model_filename)
torch.save(spherical_unet.state_dict(), model_filename)
# Show training losses
plt.plot(train_loss, label='Training loss')
plt.plot(val_loss, label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('MSE Loss')
plt.legend()
plt.show()
del dl_train, dl_val, training_ds, validation_ds
torch.cuda.empty_cache()
'''# Load optimal model
del spherical_unet
torch.cuda.empty_cache()
optimal_filename = model_filename[:-3] + '_epoch' + str(np.argmin(val_loss)) + '.h5'
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
spherical_unet.load_state_dict(torch.load(optimal_filename), strict=False)'''
# Testing data
testing_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_test, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=test_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std,
max_lead_time=max_lead_time)
dataloader_test = DataLoader(testing_ds, batch_size=int(0.3*batch_size), shuffle=False,
num_workers=num_workers)
# Compute predictions
preds = create_iterative_predictions_healpix_temp(spherical_unet, device, dataloader_test)
preds.to_netcdf(pred_filename)
# Compute and save RMSE
rmse = compute_rmse_healpix(preds, obs).load()
rmse.to_netcdf(rmse_filename)
# Show RMSE
print('Z500 - 0:', rmse.z.values[0])
print('T850 - 0:', rmse.t.values[0])
plot_rmses(rmse, rmses_weyn, lead_time=6)
del spherical_unet, preds, rmse
torch.cuda.empty_cache()
len_sqce = 4
delta_t = 12
description = "all_const_z1000_len{}_delta{}".format(len_sqce, delta_t)
model_filename = model_save_path + "spherical_unet_" + description + ".h5"
pred_filename = pred_save_path + "spherical_unet_" + description + ".nc"
rmse_filename = datadir + 'metrics/rmse_' + description + '.nc'
# Train and validation data
training_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_train, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=train_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
validation_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_valid, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=val_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std)
dl_train = DataLoader(training_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers,
pin_memory=pin_memory)
dl_val = DataLoader(validation_ds, batch_size=batch_size*2, shuffle=False, num_workers=num_workers,
pin_memory=pin_memory)
# Model
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
# Train model
train_loss, val_loss = train_model_2steps_temp(spherical_unet, device, dl_train, epochs=nb_epochs,
lr=learning_rate, validation_data=dl_val,
model_filename=model_filename)
torch.save(spherical_unet.state_dict(), model_filename)
# Show training losses
plt.plot(train_loss, label='Training loss')
plt.plot(val_loss, label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('MSE Loss')
plt.legend()
plt.show()
del dl_train, dl_val, training_ds, validation_ds
torch.cuda.empty_cache()
'''# Load optimal model
del spherical_unet
torch.cuda.empty_cache()
optimal_filename = model_filename[:-3] + '_epoch' + str(np.argmin(val_loss)) + '.h5'
spherical_unet = UNetSphericalHealpix(N=nodes, in_channels=in_features*len_sqce, out_channels=out_features,
kernel_size=3)
spherical_unet, device = init_device(spherical_unet, gpu=gpu)
spherical_unet.load_state_dict(torch.load(optimal_filename), strict=False)'''
# Testing data
testing_ds = WeatherBenchDatasetXarrayHealpixTemp(ds=ds_test, out_features=out_features,
len_sqce=len_sqce, delta_t=delta_t, years=test_years,
nodes=nodes, nb_timesteps=nb_timesteps,
mean=train_mean, std=train_std,
max_lead_time=max_lead_time)
dataloader_test = DataLoader(testing_ds, batch_size=int(0.7*batch_size), shuffle=False,
num_workers=num_workers)
# Compute predictions
preds = create_iterative_predictions_healpix_temp(spherical_unet, device, dataloader_test)
preds.to_netcdf(pred_filename)
# Compute and save RMSE
rmse = compute_rmse_healpix(preds, obs).load()
rmse.to_netcdf(rmse_filename)
# Show RMSE
print('Z500 - 0:', rmse.z.values[0])
print('T850 - 0:', rmse.t.values[0])
f, axs = plt.subplots(1, 2, figsize=(17, 5))
lead_times_ = np.arange(delta_t, max_lead_time + delta_t, delta_t)
lead_times = np.arange(6, max_lead_time + 6, 6)
axs[0].plot(lead_times_, rmse.z.values, label='Spherical')
axs[0].plot(lead_times, rmses_weyn.z.values, label='Weyn 2020')
axs[0].legend()
axs[1].plot(lead_times_, rmse.t.values, label='Spherical')
axs[1].plot(lead_times, rmses_weyn.t.values, label='Weyn 2020')
axs[1].legend()
plt.show()
del spherical_unet, preds, rmse
torch.cuda.empty_cache()
###Output
Loading data into RAM
Loading data into RAM
Epoch: 1/ 20 - loss: 0.129 - val_loss: 0.09034 - time: 1855.595276
Epoch: 2/ 20 - loss: 0.064 - val_loss: 0.07248 - time: 1856.446625
Epoch: 3/ 20 - loss: 0.056 - val_loss: 0.06647 - time: 1854.087487
Epoch: 4/ 20 - loss: 0.052 - val_loss: 0.06649 - time: 1854.378580
Epoch: 5/ 20 - loss: 0.049 - val_loss: 0.05765 - time: 1853.870775
Epoch: 6/ 20 - loss: 0.047 - val_loss: 0.05957 - time: 1855.225228
Epoch: 7/ 20 - loss: 0.046 - val_loss: 0.05875 - time: 1842.814106
Epoch: 8/ 20 - loss: 0.045 - val_loss: 0.05620 - time: 1842.784569
Epoch: 9/ 20 - loss: 0.044 - val_loss: 0.05718 - time: 1860.227736
Epoch: 10/ 20 - loss: 0.043 - val_loss: 0.05515 - time: 1862.099493
Epoch: 11/ 20 - loss: 0.043 - val_loss: 0.05556 - time: 1864.546319
Epoch: 12/ 20 - loss: 0.042 - val_loss: 0.05499 - time: 1869.212194
Epoch: 13/ 20 - loss: 0.042 - val_loss: 0.05418 - time: 1861.810884
Epoch: 14/ 20 - loss: 0.041 - val_loss: 0.05737 - time: 1861.304696
Epoch: 15/ 20 - loss: 0.041 - val_loss: 0.05335 - time: 1863.744149
Epoch: 16/ 20 - loss: 0.041 - val_loss: 0.06428 - time: 1864.160252
Epoch: 17/ 20 - loss: 0.040 - val_loss: 0.05512 - time: 1863.265510
Epoch: 18/ 20 - loss: 0.040 - val_loss: 0.05496 - time: 1862.210989
Epoch: 19/ 20 - loss: 0.040 - val_loss: 0.05357 - time: 1862.821134
Epoch: 20/ 20 - loss: 0.040 - val_loss: 0.05586 - time: 1864.325143
###Markdown
Comparison
###Code
filename = datadir+'metrics/rmse_all_const_z1000_len{}_delta{}'
rmse_2_6 = xr.open_dataset(filename.format(2, 6) + '.nc')
rmse_2_12 = xr.open_dataset(filename.format(2, 12) + '.nc')
rmse_4_6 = xr.open_dataset(filename.format(4, 6) + '.nc')
rmse_4_12 = xr.open_dataset(filename.format(4, 12) + '.nc')
rmse_1 = xr.open_dataset(datadir+'metrics/rmse_all_const.nc')
lead_times_ = np.arange(12, max_lead_time + 12, 12)
lead_times = np.arange(6, max_lead_time + 6, 6)
f, axs = plt.subplots(1, 2, figsize=(17, 6))
xlabels = [str(t) if t%4 == 0 else '' for t in lead_times]
axs[0].plot(lead_times, rmse_1.z.values, label='$L=1$, $\Delta_t = 6$')
axs[0].plot(lead_times, rmse_2_6.z.values, label='$L=2$, $\Delta_t = 6$')
axs[0].plot(lead_times, rmse_4_6.z.values, label='$L=4$, $\Delta_t = 6$')
axs[0].legend()
axs[1].plot(lead_times, rmse_1.t.values, label='$L=1$, $\Delta_t = 6$')
axs[1].plot(lead_times, rmse_2_6.t.values, label='$L=2$, $\Delta_t = 6$')
axs[1].plot(lead_times, rmse_4_6.t.values, label='$L=4$, $\Delta_t = 6$')
axs[1].legend()
axs[0].set_xticks(lead_times)
axs[1].set_xticks(lead_times)
axs[0].set_xticklabels(xlabels)
axs[1].set_xticklabels(xlabels)
axs[0].tick_params(axis='both', which='major', labelsize=16)
axs[1].tick_params(axis='both', which='major', labelsize=16)
axs[0].set_xlabel('Lead time [h]', fontsize='18')
axs[1].set_xlabel('Lead time [h]', fontsize='18')
axs[0].set_ylabel('RMSE [$m^2 s^{-2}$]', fontsize='18')
axs[1].set_ylabel('RMSE [K]', fontsize='18')
axs[0].set_title('Z500', fontsize='22')
axs[1].set_title('T850', fontsize='22')
axs[0].legend(fontsize=16, loc='upper left')
axs[1].legend(fontsize=16)
plt.tight_layout()
plt.savefig('temporal_rmse_delta6.eps', format='eps', bbox_inches='tight')
plt.show()
f, axs = plt.subplots(1, 2, figsize=(17, 6))
axs[0].plot(lead_times, rmse_2_6.z.values, label='$L=2$, $\Delta_t = 6$')
axs[0].plot(lead_times, rmse_4_6.z.values, label='$L=4$, $\Delta_t = 6$')
axs[0].plot(lead_times_, rmse_2_12.z.values, label='$L=2$, $\Delta_t = 12$')
axs[0].plot(lead_times_, rmse_4_12.z.values, label='$L=4$, $\Delta_t = 12$')
#axs[0].plot(lead_times, rmse_1.z.values, label='Spherical, L=1, delta=6')
#axs[0].plot(lead_times, rmses_weyn.z.values, label='Weyn 2020')
axs[1].plot(lead_times, rmse_2_6.t.values, label='$L=2$, $\Delta_t = 6$')
axs[1].plot(lead_times, rmse_4_6.t.values, label='$L=4$, $\Delta_t = 6$')
axs[1].plot(lead_times_, rmse_2_12.t.values, label='$L=2$, $\Delta_t = 12$')
axs[1].plot(lead_times_, rmse_4_12.t.values, label='$L=4$, $\Delta_t = 12$')
#axs[1].plot(lead_times, rmse_1.t.values, label='Spherical, L=1, delta=6')
#axs[1].plot(lead_times, rmses_weyn.t.values, label='Weyn 2020')
axs[0].set_xticks(lead_times)
axs[1].set_xticks(lead_times)
axs[0].set_xticklabels(xlabels)
axs[1].set_xticklabels(xlabels)
axs[0].tick_params(axis='both', which='major', labelsize=16)
axs[1].tick_params(axis='both', which='major', labelsize=16)
axs[0].set_xlabel('Lead time [h]', fontsize='18')
axs[1].set_xlabel('Lead time [h]', fontsize='18')
axs[0].set_ylabel('RMSE [$m^2 s^{-2}$]', fontsize='18')
axs[1].set_ylabel('RMSE [K]', fontsize='18')
axs[0].set_title('Z500', fontsize='22')
axs[1].set_title('T850', fontsize='22')
axs[0].legend(fontsize=16, loc='upper left')
axs[1].legend(fontsize=16)
plt.tight_layout()
plt.savefig('temporal_rmse_all.eps', format='eps', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
|
nbs/test_concurrent_downloads.ipynb
|
###Markdown
Test Downloads
Create Files
###Code
# dont_test
import io
import os
import hashlib
import requests
import pandas as pd
from pathlib import Path
from datetime import datetime
from urllib.parse import urljoin
def calculate_checksum(content):
return hashlib.md5(content).hexdigest()
# dont_test
project_root = Path.cwd() / ".."
data_root = project_root / "example" / "data"
data_root.mkdir(exist_ok=True)
%%time
# dont_test
file_size = 1024 * 1024 * 10
num_to_hash = {}
for num in range(100):
file_path = data_root / str(num)
if file_path.exists():
with file_path.open("rb") as f:
data = f.read()
else:
with file_path.open("wb") as f:
data = os.urandom(file_size)
f.write(data)
num_to_hash[num] = calculate_checksum(data)
###Output
CPU times: user 1.55 s, sys: 177 ms, total: 1.72 s
Wall time: 1.73 s
###Markdown
Plotting Helper
###Code
# dont_test
def df_from_timestamps(timestamps, file_size):
min_started = min([started for started, stopped in timestamps])
max_stopped = max([stopped for started, stopped in timestamps])
index = pd.date_range(start=min_started, end=max_stopped, freq="ms")
df = pd.DataFrame(index=index)
for i, ts in enumerate(timestamps):
start, end = ts
duration = (end - start).total_seconds()
bandwidth = (file_size / duration) / 10 ** 6
column = f"client_{i}"
df.loc[:, column] = 0
df.loc[start:end, column] = bandwidth
return df
def plot_download_df(df):
ax = df.plot(figsize=(10, 6), legend=False, title="Bandwidth used by clients")
ax.set_xlabel("Time")
_ = ax.set_ylabel("MB/s")
###Output
_____no_output_____
###Markdown
Download Synchronously
Start the server with:
```shell
cd example
gunicorn -w 2 -k uvicorn.workers.UvicornWorker -b :8000 "example.asgi:application"
```
###Code
# dont_test
timestamps = []
base_url = "http://localhost:8000/sync/"
for num, expected_hash in num_to_hash.items():
url = urljoin(base_url, f"{num}")
started = datetime.now()
r = requests.get(url)
stopped = datetime.now()
timestamps.append((started, stopped))
r.raise_for_status()
actual_hash = calculate_checksum(r.content)
assert expected_hash == actual_hash
# dont_test
df = df_from_timestamps(timestamps, file_size)
plot_download_df(df)
###Output
_____no_output_____
###Markdown
Downloads Concurrently
###Code
# dont_test
import gevent
from gevent import monkey
monkey.patch_all()
class Response:
def __init__(self, url, content, started, stopped):
self.url = url
self.content = content
self.started = started
self.stopped = stopped
def streaming_fetch(url):
chunks = []
with requests.get(url, stream=True) as r:
r.raise_for_status()
started = datetime.now()
for chunk in r.iter_content(chunk_size=4096):
chunks.append(chunk)
stopped = datetime.now()
response = Response(url, b"".join(chunks), started, stopped)
return response
# dont_test
def make_async_requests(prefix):
    urls = []
    base_url = f"http://localhost:8000/{prefix}/"
    for num, expected_hash in num_to_hash.items():
        url = urljoin(base_url, f"{num}")
        urls.append(url)
    jobs = [gevent.spawn(streaming_fetch, _url) for _url in urls]
    responses = gevent.wait(jobs)
    responses = [r.value for r in responses]
    timestamps = []
    for num, response in enumerate(responses):
        timestamps.append((response.started, response.stopped))
        expected_hash = num_to_hash[num]
        actual_hash = calculate_checksum(response.content)
        # verify the downloaded content matches the stored checksum, as in the synchronous case
        assert expected_hash == actual_hash
    return timestamps
# dont_test
timestamps = make_async_requests("async_filesystem")
df = df_from_timestamps(timestamps, file_size)
plot_download_df(df)
# dont_test
timestamps = make_async_requests("async_minio")
df = df_from_timestamps(timestamps, file_size)
plot_download_df(df)
###Output
_____no_output_____
###Markdown
Create MinIO Objects
###Code
# dont_test
from minio import Minio
from minio.error import S3Error
def get_minio_client_and_bucket(endpoint, params, bucket):
client = Minio(endpoint, **params)
found = client.bucket_exists(bucket)
if not found:
client.make_bucket(bucket)
return client
def checksum_for_minio(client, bucket, key):
try:
response = client.get_object(bucket, key)
data = response.read()
finally:
response.close()
response.release_conn()
return calculate_checksum(data)
def create_file_minio(client, bucket, key, size):
data = os.urandom(size)
result = client.put_object(
bucket,
key,
io.BytesIO(data),
size,
)
return calculate_checksum(data)
# dont_test
endpoint = "127.0.0.1:9000"
params = {
"access_key": "minioadmin",
"secret_key": "minioadmin",
"secure": False,
}
bucket = "fileresponse"
client = get_minio_client_and_bucket(endpoint, params, bucket)
file_size = 1024 * 1024 * 10
num_to_hash = {}
for num in range(100):
key = str(num)
try:
result = client.stat_object(bucket, key)
checksum = checksum_for_minio(client, bucket, key)
except S3Error:
# object does not exist -> create
checksum = create_file_minio(client, bucket, key, file_size)
num_to_hash[num] = checksum
###Output
_____no_output_____
|
gdf_pca/pca_lstm_overview.ipynb
|
###Markdown
Results for PCA+LSTM
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
from keras.models import Sequential, model_from_json
from keras.layers import Dense, Dropout, LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
import warnings
import numpy as np
from collections import OrderedDict
import os
from lob_data_utils import lob, db_result, gdf_pca, model
from lob_data_utils.svm_calculation import lob_svm
sns.set_style('whitegrid')
warnings.filterwarnings('ignore')
data_dir = 'res_lstm/'
if_should_savefig = True # TODO
df_res = pd.DataFrame()
for f in os.listdir(data_dir):
d = pd.read_csv(os.path.join(data_dir, f))
d['filename'] = [f for i in range(len(d))]
# if d['stock'].iloc[0] in [11869, 9268, 4549]:
# continue
df_res = df_res.append(d)
df_res['diff'] = df_res['train_matthews'] - df_res['matthews']
print(df_res['r'].unique(), df_res['s'].unique())
df_log = pd.read_csv('res_log_que.csv')
columns = ['matthews', 'test_matthews', 'stock', 'unit']
df_best = df_res.sort_values(by='matthews', ascending=False).groupby(['stock']).head(1)
df_best = pd.merge(df_best, df_log, on='stock', suffixes=['_lstm', '_log'])
df_best.index = df_best['stock']
df_best[['r', 's', 'unit', 'kernel_reg', 'train_matthews_lstm',
'matthews_lstm', 'test_matthews_lstm', 'test_matthews_log', 'stock', 'filename']]
df_best[['r', 's', 'train_roc_auc_lstm',
'roc_auc_lstm', 'test_roc_auc_lstm', 'test_roc_auc_log', 'stock', 'filename']]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
df_best[['train_matthews_lstm', 'matthews_lstm', 'test_matthews_lstm']].plot(kind='bar', ax=ax1)
ax1.legend(['Train', 'Validation', 'Test'])
ax1.set_title('MCC score for GDF+PCA+LSTM')
df_best[['train_roc_auc_lstm', 'roc_auc_lstm', 'test_roc_auc_lstm']].plot(kind='bar', ax=ax2)
ax2.legend(['Train', 'Validation', 'Test'])
ax2.set_ylim(0.5, 0.7)
ax2.set_title('ROC area score for GDF+PCA+LSTM')
plt.tight_layout()
if if_should_savefig:
plt.savefig('gdf_pca_lstm_mcc_roc_scores_bar.png')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
df_best[['test_matthews_lstm', 'test_matthews_log']].plot(kind='bar', ax=ax1)
ax1.legend(['GDF+PCA+LSTM', 'QUE+LOG'])
ax1.set_title('MCC score for GDF+PCA+LSTM vs QUE+LOG on test data-set')
df_best[['test_roc_auc_lstm', 'test_roc_auc_log']].plot(kind='bar', ax=ax2)
ax2.legend(['GDF+PCA+LSTM', 'QUE+LOG'])
ax2.set_ylim(0.5, 0.7)
ax2.set_title('ROC area score for GDF+PCA+LSTM vs QUE+LOG on test data-set')
plt.tight_layout()
if if_should_savefig:
plt.savefig('gdf_pca_lstm_que_log_mcc_roc_scores_bar.png')
df_best[['train_matthews_lstm', 'matthews_lstm', 'test_matthews_lstm', 'test_matthews_log']].plot(kind='bar', figsize=(16, 4))
plt.legend(['Train', 'Validation', 'Test', 'QUE+LOG Test'])
df_best[['train_roc_auc_lstm', 'roc_auc_lstm', 'test_roc_auc_lstm', 'test_roc_auc_log']].plot(kind='bar', figsize=(16, 4))
plt.legend(['Train', 'Validation', 'Test', 'QUE+LOG Test'])
plt.ylim(0.5, 0.7)
plt.tight_layout()
print(df_best[['train_matthews_lstm', 'matthews_lstm', 'test_matthews_lstm',
'train_roc_auc_lstm', 'roc_auc_lstm', 'test_roc_auc_lstm']].describe().to_latex())
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
sns.distplot(df_best['train_matthews_lstm'], label='Train', ax=ax1)
sns.distplot(df_best['matthews_lstm'], label='Validation', ax=ax1)
sns.distplot(df_best['test_matthews_lstm'], label='Test', ax=ax1)
ax1.legend(['Train', 'Validation', 'Test'])
ax1.set_title('MCC score distribution for GDF+PCA+LSTM')
ax1.set_xlabel('MCC score')
sns.distplot(df_best['train_roc_auc_lstm'], label='Train', ax=ax2)
sns.distplot(df_best['roc_auc_lstm'], label='Validation', ax=ax2)
sns.distplot(df_best['test_roc_auc_lstm'], label='Test', ax=ax2)
ax2.legend(['Train', 'Validation', 'Test'])
ax2.set_title('ROC area score distribution for GDF+PCA+LSTM')
ax2.set_xlabel('ROC area score')
plt.tight_layout()
if if_should_savefig:
plt.savefig('gdf_pca_lstm_score_dist.png')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
columns = ['stock', 'matthews_lstm', 'roc_auc_lstm',
'test_matthews_lstm', 'test_roc_auc_lstm', 'train_matthews_lstm', 'train_roc_auc_lstm']
df = df_best[columns].copy()
df.rename(columns={
'matthews_lstm': 'Validation', 'test_matthews_lstm': 'Testing', 'train_matthews_lstm': 'Train'}, inplace=True)
df = df.melt(['stock', 'roc_auc_lstm', 'test_roc_auc_lstm', 'train_roc_auc_lstm'])
sns.violinplot(x="variable", y="value", data=df, ax=ax1)
ax1.set_title('Distribution of MCC scores')
ax1.set_xlabel('Data Set')
ax1.set_ylabel('Score')
df = df_best[columns].copy()
df.rename(columns={'roc_auc_lstm': 'Validation', 'test_roc_auc_lstm': 'Testing', 'train_roc_auc_lstm': 'Train'}, inplace=True)
df = df.melt(['stock', 'matthews_lstm', 'test_matthews_lstm', 'train_matthews_lstm'])
ax2.set_title('Distribution of ROC Area scores')
sns.violinplot(x="variable", y="value", data=df, ax=ax2)
ax2.set_xlabel('Data Set')
ax2.set_ylabel('Score')
plt.tight_layout()
if if_should_savefig:
plt.savefig('violin_distribution_scores_gdf_pca_lstm.png')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
df_best['pca_components'].plot(kind='bar', color=['b'], alpha=0.5, ax=ax1)
ax1.set_title('Number of PCA components')
ax1.set_ylabel('PCA components')
sns.distplot(df_best['pca_components'], ax=ax2, bins=10, kde=False)
ax2.set_title('PCA components histogram')
ax2.set_ylabel('Count')
ax2.set_xlabel('PCA component')
plt.tight_layout()
if if_should_savefig:
plt.savefig('gdf_pca_lstm_pca_components.png')
plt.scatter(x=df_best['r'], y=df_best['s'], c=df_best['pca_components'])
plt.title('Number of PCA components for each (r, s) pair')
plt.xlabel('r')
plt.ylabel('s')
# colour encodes the number of PCA components
plt.colorbar(label='PCA components')
plt.tight_layout()
if if_should_savefig:
    # use a distinct filename so the bar/histogram figure above is not overwritten
    plt.savefig('gdf_pca_lstm_pca_components_scatter.png')
import json
n_units = []
for i, row in df_best.iterrows():
arch = json.loads(row['arch'])
n_unit = 0
for l in arch['config']['layers']:
n_unit += l['config']['units']
n_units.append(n_unit)
# the column below is needed by the plots in the next cells
df_best['total_n_units'] = n_units
# df_best['total_n_units'].plot(kind='bar', color=['b'], alpha=0.5)
# plt.title('Number of LSTM units')
# plt.ylabel('units')
# plt.tight_layout()
# if if_should_savefig:
# plt.savefig('gdf_pca_lstm_units.png')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
df_best['total_n_units'].plot(kind='bar', color=['b'], alpha=0.5, ax=ax1)
ax1.set_title('Number of LSTM units')
ax1.set_ylabel('LSTM units')
sns.distplot(df_best['total_n_units'], ax=ax2, bins=32, kde=False)
ax2.set_title('Histogram of number of LSTM units')
ax2.set_ylabel('Count')
ax2.set_xlabel('Number of LSTM units')
plt.tight_layout()
if if_should_savefig:
plt.savefig('gdf_pca_lstm_units.png')
r_s_dict = OrderedDict()
r_parameters = [0.01, 0.1]
s_parameters = [0.1, 0.5]
for r in r_parameters:
for s in s_parameters:
        r_s_dict['r={}, s={}'.format(r, s)] = df_best[(df_best['r'] == r) & (df_best['s'] == s)][
            'matthews_lstm'].values
plt.figure(figsize=(16, 8))
ax = sns.boxplot(data=list(r_s_dict.values()))
plt.ylabel('MCC score')
plt.xlabel('Parameters r and s')
_ = ax.set_xticklabels(list(r_s_dict.keys()), rotation=45)
plt.title('MCC score distribution for different r and s parameters for validation set')
###Output
_____no_output_____
###Markdown
Comparison with QUE+LOG
###Code
df_best['diff_test_matthews'] = df_best['test_matthews_lstm'] - df_best['test_matthews_log']
df_best['diff_train_matthews'] = df_best['train_matthews_lstm'] - df_best['train_matthews_log']
df_best['diff_matthews'] = df_best['matthews_lstm'] - df_best['matthews_log']
df_best['diff_test_roc_auc'] = df_best['test_roc_auc_lstm'] - df_best['test_roc_auc_log']
df_best['diff_train_roc_auc'] = df_best['train_roc_auc_lstm'] - df_best['train_roc_auc_log']
df_best['diff_roc_auc'] = df_best['roc_auc_lstm'] - df_best['roc_auc_log']
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 6))
sns.distplot(df_best['diff_train_matthews'], label='Train', ax=ax1)
sns.distplot(df_best['diff_matthews'], label='Validation', ax=ax1)
sns.distplot(df_best['diff_test_matthews'], label='Test', ax=ax1)
ax1.set_title('Dist. plot of differences of MCC score for GDF+PCA+LSTM and QUE+LOG')
ax1.set_xlabel('MCC score')
ax1.legend(['Train', 'Validation', 'Test'])
sns.distplot(df_best['diff_train_roc_auc'], label='Train', ax=ax2)
sns.distplot(df_best['diff_roc_auc'], label='Validation', ax=ax2)
sns.distplot(df_best['diff_test_roc_auc'], label='Test', ax=ax2)
ax2.set_title('Dist. plot of differences of ROC area score for GDF+PCA+LSTM and QUE+LOG')
ax2.legend(['Train', 'Validation', 'Test'])
ax2.set_xlabel('ROC area score')
plt.tight_layout()
if if_should_savefig:
plt.savefig('gdf_pca_lstm_and_que_log_score_diff.png')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
columns = ['stock', 'diff_matthews', 'diff_roc_auc',
'diff_test_matthews', 'diff_test_roc_auc', 'diff_train_matthews', 'diff_train_roc_auc']
df = df_best[columns].copy()
df.rename(columns={
'diff_matthews': 'Validation', 'diff_test_matthews': 'Testing', 'diff_train_matthews': 'Train'}, inplace=True)
df = df.melt(['stock', 'diff_roc_auc', 'diff_test_roc_auc', 'diff_train_roc_auc'])
sns.violinplot(x="variable", y="value", data=df, ax=ax1)
ax1.set_title('Distribution of differences of MCC scores')
ax1.set_xlabel('Data Set')
ax1.set_ylabel('Score')
df = df_best[columns].copy()
df.rename(
columns={'diff_roc_auc': 'Validation', 'diff_test_roc_auc': 'Testing', 'diff_train_roc_auc': 'Train'},
inplace=True)
df = df.melt(['stock', 'diff_matthews', 'diff_test_matthews', 'diff_train_matthews'])
ax2.set_title('Distribution of differences of ROC Area scores')
sns.violinplot(x="variable", y="value", data=df, ax=ax2)
ax2.set_xlabel('Data Set')
ax2.set_ylabel('Score')
plt.tight_layout()
if if_should_savefig:
plt.savefig('gdf_pca_lstm_and_que_log_violin_score_diff.png')
bad = df_best[df_best['test_matthews_lstm'] < df_best['test_matthews_log']]['stock'].values
df_best[['diff_train_matthews', 'diff_matthews', 'diff_test_matthews',
'diff_train_roc_auc', 'diff_roc_auc', 'diff_test_roc_auc']][df_best['stock'].isin(bad)]
df_best[['diff_train_matthews', 'diff_matthews', 'diff_test_matthews',
'diff_train_roc_auc', 'diff_roc_auc', 'diff_test_roc_auc']][df_best['stock'].isin(bad)].describe()
df_best[['diff_train_matthews', 'diff_matthews', 'diff_test_matthews',
'diff_train_roc_auc', 'diff_roc_auc', 'diff_test_roc_auc']].describe()
print(df_best[['diff_train_matthews', 'diff_matthews', 'diff_test_matthews',
'diff_train_roc_auc', 'diff_roc_auc', 'diff_test_roc_auc']].describe().to_latex())
print(df_best[df_best['test_roc_auc_lstm'] < df_best['test_roc_auc_log']]['stock'].values)
print(df_best[df_best['test_matthews_lstm'] < df_best['test_matthews_log']]['stock'].values)
columns = ['stock'] + [c for c in df_best.columns if 'matthews' in c]
df_best[columns + ['arch']]
from keras.utils import plot_model

for i, row in df_best.iterrows():
    m = model_from_json(row['arch'])
    st = row['stock']
    r = row['r']
    s = row['s']
    if if_should_savefig:
        plot_model(m, show_layer_names=True, show_shapes=True, to_file=f'plot_model/model_{st}_r{r}_s{s}.png')
df_best[['r', 's', 'matthews_lstm', 'test_matthews_lstm', 'test_matthews_log', 'stock', 'filename']]
x = np.linspace(-10, 10, 100)
plt.arrow(dx=0, dy=2.1, y=-1, x=0, length_includes_head=True, head_width=0.05, color='black')
plt.arrow(dx=16, dy=0, y=0, x=-8, length_includes_head=True, head_width=0.05, color='black')
plt.plot(x, 1 / (1 + np.exp(-x)))
plt.xlim(-8, 8.1)
plt.ylim(-0.1, 1.1)
plt.title('Sigmoid function')
plt.tight_layout()
plt.savefig('sigmoid.png')
###Output
_____no_output_____
|
Day5_Batch7.ipynb
|
###Markdown
Question 1
###Code
check_list=[1,1,5,7,9,6,4]
sub_list=[1,1,5]
print("original list : "+str(check_list))
print("original sublist : "+str(sub_list))
flag=0
# note: issubset() compares as sets, so duplicate values in sub_list are ignored
if(set(sub_list).issubset(set(check_list))):
flag=1
if(flag):
print("it's a match")
else:
print("it's gone")
###Output
original list : [1, 1, 5, 7, 9, 6, 4]
original sublist : [1, 1, 5]
it's a match
###Markdown
Question 2
###Code
def is_prime(n):
    if n < 2:
        return False
    # trial division up to sqrt(n) is sufficient
    for x in range(2, int(n ** 0.5) + 1):
        if n % x == 0:
            return False
    return True
fltrObj=filter(is_prime, range(2500))
print ('Prime numbers between 1-2500:', list(fltrObj))
###Output
Prime numbers between 1-2500: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, 1019, 1021, 1031, 1033, 1039, 1049, 1051, 1061, 1063, 1069, 1087, 1091, 1093, 1097, 1103, 1109, 1117, 1123, 1129, 1151, 1153, 1163, 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223, 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283, 1289, 1291, 1297, 1301, 1303, 1307, 1319, 1321, 1327, 1361, 1367, 1373, 1381, 1399, 1409, 1423, 1427, 1429, 1433, 1439, 1447, 1451, 1453, 1459, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511, 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571, 1579, 1583, 1597, 1601, 1607, 1609, 1613, 1619, 1621, 1627, 1637, 1657, 1663, 1667, 1669, 1693, 1697, 1699, 1709, 1721, 1723, 1733, 1741, 1747, 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811, 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877, 1879, 1889, 1901, 1907, 1913, 1931, 1933, 1949, 1951, 1973, 1979, 1987, 1993, 1997, 1999, 2003, 2011, 2017, 2027, 2029, 2039, 2053, 2063, 2069, 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129, 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203, 2207, 2213, 2221, 2237, 2239, 2243, 2251, 2267, 2269, 2273, 2281, 2287, 2293, 2297, 2309, 2311, 2333, 2339, 2341, 2347, 2351, 2357, 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423, 2437, 2441, 2447, 2459, 2467, 2473, 2477]
###Markdown
Question 3
###Code
lst=("hi this is Onkar, I am from Pune ")
lst_num=map(lambda x:x.upper(),lst)
print(list(lst_num))
###Output
['H', 'I', ' ', 'T', 'H', 'I', 'S', ' ', 'I', 'S', ' ', 'O', 'N', 'K', 'A', 'R', ',', ' ', 'I', ' ', 'A', 'M', ' ', 'F', 'R', 'O', 'M', ' ', 'P', 'U', 'N', 'E', ' ']
|
docs/quick_start/demo/op2_pandas_unstack.ipynb
|
###Markdown
Manipulating the Pandas DataFrame. The Jupyter notebook for this demo can be found in: - docs/quick_start/demo/op2_pandas_unstack.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_pandas_unstack.ipynb This example will use pandas unstack. The unstack method on a DataFrame moves an index level from rows to columns. First let's read in some data:
###Code
import os
import pyNastran
pkg_path = pyNastran.__path__[0]
from pyNastran.op2.op2 import read_op2
import pandas as pd
pd.set_option('precision', 2)
op2_filename = os.path.join(pkg_path, '..', 'models', 'iSat', 'iSat_launch_100Hz.op2')
from pyNastran.op2.op2 import read_op2
isat = read_op2(op2_filename, build_dataframe=True, debug=False, skip_undefined_matrices=True)
cbar = isat.cbar_force[1].data_frame
cbar.head()
###Output
_____no_output_____
###Markdown
First I'm going to pull out a small subset to work with
###Code
csub = cbar.loc[3323:3324,1:2]
csub
###Output
_____no_output_____
###Markdown
I happen to like the way that's organized, but let's say that I want to have the item descriptions in columns and the mode IDs and element numbers in rows. To do that, I'll first move the element IDs up to the columns using .unstack(level=0) and then transpose the result:
###Code
csub.unstack(level=0).T
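# A tiny self-contained illustration of what unstack(level=0).T does, using a
# made-up two-level frame (the element IDs and force values below are hypothetical,
# not taken from the op2 results above):
toy = pd.DataFrame(
    {'Force': [1.0, 2.0, 3.0, 4.0]},
    index=pd.MultiIndex.from_product([[3323, 3324], [1, 2]],
                                     names=['ElementID', 'Mode']))
toy_unstacked = toy.unstack(level=0).T  # ElementID moves to the columns, then rows/columns swap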
###Output
_____no_output_____
###Markdown
unstack requires unique row indices so I can't work with CQUAD4 stresses as they're currently output, but I'll work with CHEXA stresses. Let's pull out the first two elements and first two modes:
###Code
chs = isat.chexa_stress[1].data_frame.loc[3684:3685,1:2]
chs
###Output
_____no_output_____
###Markdown
Now I want to put ElementID and the Node ID in the rows along with the Load ID, and have the items in the columns:
###Code
cht = chs.unstack(level=[0,1]).T
cht
###Output
_____no_output_____
###Markdown
Maybe I'd like my rows organized with the modes on the inside. I can do that by swapping levels: We actually need to get rid of the extra rows using dropna():
###Code
cht = cht.dropna()
cht
# mode, eigr, freq, rad, eids, nids # initial
# nids, eids, eigr, freq, rad, mode # final
cht.swaplevel(0,4).swaplevel(1,5).swaplevel(2,5).swaplevel(4, 5)
###Output
_____no_output_____
###Markdown
Alternatively I can do that by first using reset_index to move all the index columns into data, and then using set_index to define the order of columns I want as my index:
###Code
cht.reset_index().set_index(['ElementID','NodeID','Mode','Freq']).sort_index()
###Output
_____no_output_____
|
stats/frequential_parameter_estimation.ipynb
|
###Markdown
Frequentist parameter estimation. I am following this course to learn about statistical inference; this video approaches the topic from a frequentist point of view: https://www.youtube.com/watch?v=4UJc0S8APm4 Given a parameter $\theta$, a random variable is produced, for which our estimator generates another random variable $\Theta$, which is our estimated value. The estimator works both with scalar values and with vectors. The goal is to build an estimator whose error ($\Theta - \theta$) is as small as possible. A good estimator has the following characteristics (the value of $\theta$ is unknown): - Unbiased: the expected value of the estimator $E[\Theta]$ approaches $\theta$ - Consistent: $\Theta_n \to \theta$ - Has a small mean square error. He describes 2 methods for building an estimator of a parameter: - Maximum likelihood - Sample distribution (CLT theorem) Questions ??? - How small should the MSE be? What counts as small in this context?
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from functools import partial

def check_bias(distribution_generator, estimator, N=1000):
results = []
for i in range(N):
results.append(estimator(distribution_generator()))
return np.mean(results)
def compute_samples_table(distribution_generator, estimator):
results = []
for i in [1, 10, 100, 1000, 10000]:
X = distribution_generator(i)
mu = estimator(X)
df = pd.DataFrame({'values': X})
df.loc[:, "estimate"] = mu
df.loc[:, "sample"] = i
results.append(df)
return pd.concat(results)
def estimators_samples_fixed_N(distribution_generator, estimator, N):
results = []
for i in [1, 10, 100, 1000, 10000]:
estimators = []
for j in range(i):
mu = estimator(distribution_generator(N))
estimators.append(mu)
df = pd.DataFrame({'values': estimators})
df.loc[:, "sample"] = i
results.append(df)
return pd.concat(results)
def estimators_samples_table(distribution_generator, estimator):
results = []
for i in [1, 10, 100, 1000, 10000]:
df = estimators_samples_fixed_N(distribution_generator,
estimator, i)
df.loc[:, "N"] = i
results.append(df)
return pd.concat(results)
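# Illustrative helper (an addition, not part of the original course code): for a given
# sample size N it draws many samples, applies the estimator, and reports the empirical
# bias, variance and MSE, which should satisfy MSE ~ bias**2 + variance.
def empirical_bias_variance_mse(distribution_generator, estimator, theta, N, repeats=2000):
    estimates = np.array([estimator(distribution_generator(N)) for _ in range(repeats)])
    bias = estimates.mean() - theta
    variance = estimates.var()
    mse = np.mean((estimates - theta) ** 2)
    return bias, variance, mse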
###Output
_____no_output_____
###Markdown
Maximum likelihood. Sample distribution. Build an estimator using the central limit theorem: - Check that the expected value of the estimator approaches the value of the parameter. - Check that the value of the estimator improves and is consistent as the number of samples increases. - Check the mean square error; it should be smaller with more data, by the law of large numbers. Sample mean distribution. The estimator can be built with the average of the sample data: $\dfrac{\sum_{i=1}^{N} x_i}{N}$
###Code
def estimate_mean(X):
return np.mean(X)
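# Quick illustrative check (an addition): by the CLT, the sample mean of N i.i.d. draws
# has standard deviation sigma/sqrt(N). The values mu=10, sigma=1, N=100 below are assumed
# here only for the check.
_means = np.array([estimate_mean(np.random.normal(10, 1, 100)) for _ in range(2000)])
_empirical_se = _means.std()        # should be close to ...
_theoretical_se = 1 / np.sqrt(100)  # ... sigma/sqrt(N) = 0.1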
###Output
_____no_output_____
###Markdown
Normal distribution. Use the numpy package to generate samples from a normal distribution, build the estimator for the mean, and check the results.
###Code
MU = 10
SIGMA = 1
def gen_normal_sample(N, mu=MU, sigma=SIGMA):
return np.random.normal(mu, sigma, N)
###Output
_____no_output_____
###Markdown
Normal distribution: Bias
###Code
MSG = """
The estimator shows no bias and becomes less volatile as the number of samples increases
"""
samples = list(range(1, 10000, 100))
expected_values = list(map(lambda e: check_bias(
partial(gen_normal_sample, 10), estimate_mean, N=e), samples))
plt.plot(samples, expected_values, 'o')
MSG = """
Make a box plot of the probability distribution of the sample with these sizes:
1, 10, 100, 1000, 10000, 100000.
The result makes sense: with more points the distribution has less
bias.
"""
df = compute_samples_table(gen_normal_sample, estimate_mean)
sns.boxplot(data=df, y="values", x="sample", showfliers=True)
###Output
_____no_output_____
###Markdown
Normal distribution: Consistency
###Code
MSG = """
The estimate should approach theta, but it is not doing so.
I must be doing something wrong; he says the estimator tends to theta in probability,
what does that mean?
"""
samples = list(range(1, 10000, 100))
expected_values = list(map(lambda e:
estimate_mean(gen_normal_sample(e)), samples))
plt.plot(samples, expected_values, 'o')
MSG = """
Make a box plot of the probability distribution of the estimator: the idea is to
generate k samples of size n and plot the distribution of these k estimates;
the larger the number k, the less variance the distribution should have.
The estimator is consistent: even with small samples the distribution
of the estimator tends towards the parameter.
"""
df_e = estimators_samples_table(gen_normal_sample, estimate_mean)
fig = plt.figure(figsize=(16, 10))
ax = fig.add_subplot(1, 1, 1)
sns.boxplot(ax=ax, data=df_e, y="values", x="sample", hue="N", showfliers=False)
###Output
_____no_output_____
###Markdown
Normal distribution: Error
###Code
MSG = """
What I would expect is for the mean square error of the parameter to decrease as the number of samples increases.
It is not always the smallest, but the value is more stable with more samples and does not change as much between runs.
"""
df_e = estimators_samples_table(gen_normal_sample, estimate_mean)
df_e.loc[:,"parameter"] = MU
df_e.loc[:,"square_error"] = (df_e["values"] - df_e["parameter"]).apply(lambda e: e**2)
df_e.head()
df_e[["sample", "square_error"]].groupby(by="sample").describe()
###Output
_____no_output_____
###Markdown
Uniform distribution
###Code
LOW = 1
MAX = 10
MU = (MAX + LOW)/2
def gen_uniform_sample(N, lower=LOW, upper=MAX):
return np.random.uniform(lower, upper, N)
###Output
_____no_output_____
###Markdown
Uniform distribution: Bias
###Code
MSG = """
The estimator is less volatile as the number of samples increases and shows no bias
"""
samples = list(range(1, 10000, 100))
expected_values = list(map(lambda e: check_bias(
partial(gen_uniform_sample, 10), estimate_mean, N=e), samples))
plt.plot(samples, expected_values, 'o')
MSG = """
Make a box plot of the probability distribution of the sample with these sizes:
1, 10, 100, 1000, 10000, 100000.
The estimator shows no bias: sometimes its value is below the parameter and other times above it
"""
df = compute_samples_table(gen_uniform_sample, estimate_mean)
sns.boxplot(data=df, y="values", x="sample", showfliers=True)
###Output
_____no_output_____
###Markdown
Uniform distribution: Consistency
###Code
MSG = """
The estimator improves considerably as the number of samples increases; the parameter is well estimated,
although the variance increases when the sample size is very small.
"""
df_e = estimators_samples_table(gen_uniform_sample, estimate_mean)
fig = plt.figure(figsize=(16, 10))
ax = fig.add_subplot(1, 1, 1)
sns.boxplot(ax=ax, data=df_e, y="values", x="sample", hue="N", showfliers=True)
###Output
_____no_output_____
###Markdown
Uniform distribution: Error
###Code
MSG = """
What I would expect is for the mean square error of the parameter to decrease as the number of samples increases.
It is not always the smallest, but the value is more stable with more samples and does not change as much between runs.
"""
df_e = estimators_samples_table(gen_uniform_sample, estimate_mean)
df_e.loc[:,"parameter"] = MU
df_e.loc[:,"square_error"] = (df_e["values"] - df_e["parameter"]).apply(lambda e: e**2)
df_e.head()
df_e[["sample", "square_error"]].groupby(by="sample").describe()
###Output
_____no_output_____
###Markdown
Exponential distribution
###Code
LAMBDA = 1.0
MU = 1/LAMBDA
def gen_exponential_sample(N, lbda=LAMBDA):
    # np.random.exponential expects the scale parameter 1/lambda, not the rate
    return np.random.exponential(1/lbda, N)
###Output
_____no_output_____
###Markdown
Exponential distribution: Bias
###Code
MSG = """
The estimator is less volatile as the number of samples increases and shows no bias
"""
samples = list(range(1, 10000, 100))
expected_values = list(map(lambda e: check_bias(
partial(gen_exponential_sample, 10), estimate_mean, N=e), samples))
plt.plot(samples, expected_values, 'o')
MSG = """
Make a box plot of the probability distribution of the sample with these sizes:
1, 10, 100, 1000, 10000, 100000.
I get the impression that the estimator has a positive bias
"""
df = compute_samples_table(gen_exponential_sample, estimate_mean)
sns.boxplot(data=df, y="values", x="sample", showfliers=True)
###Output
_____no_output_____
###Markdown
Exponential distribution: Consistency
###Code
MSG = """
The estimator does not work well with very small sample sizes: even as the number of samples increases, the estimated
value of the parameter stays far from the true value. With samples of size 100 it works well even when the number of samples
is small.
"""
df_e = estimators_samples_table(gen_exponential_sample, estimate_mean)
fig = plt.figure(figsize=(16, 10))
ax = fig.add_subplot(1, 1, 1)
sns.boxplot(ax=ax, data=df_e, y="values", x="sample", hue="N", showfliers=True)
###Output
_____no_output_____
###Markdown
Exponential distribution: Error
###Code
MSG = """
Although I can see there is a bias, the error is as small as in the case of the uniform distribution.
"""
df_e = estimators_samples_table(gen_exponential_sample, estimate_mean)
df_e.loc[:,"parameter"] = MU
df_e.loc[:,"square_error"] = (df_e["values"] - df_e["parameter"]).apply(lambda e: e**2)
df_e.head()
df_e[["sample", "square_error"]].groupby(by="sample").describe()
###Output
_____no_output_____
|
notebooks/center_smoother-sd.ipynb
|
###Markdown
The networks can take a while to download and process from OSM, but we have pre-built networks for every metro in the country stored in our quilt bucket
###Code
if not os.path.exists("../data/41740.h5"):
p = quilt3.Package.browse('osm/metro_networks_8k', 's3://spatial-ucr')
p['41740.h5'].fetch("../data/")
sd = Community.from_lodes(msa_fips='41740')
gdf = sd.gdf
gdf.columns
gdf.dropna(subset=['total_employees']).plot(column='total_employees', scheme='quantiles', k=6, cmap='YlOrBr')
net = pdna.Network.from_hdf5("../data/41740.h5")
# change this number up to 5000 (upper limit)
#net.precompute(2000)
net.precompute(3000)
#net.precompute(4000)
#net.precompute(5000)
###Output
_____no_output_____
###Markdown
Here we're doing a KNN to get the intersection node nearest to each block centroid
###Code
gdf.plot()
gdf["node_ids"] = net.get_node_ids(gdf.centroid.x,
gdf.centroid.y)
gdf
###Output
_____no_output_____
###Markdown
Then, create a new variable on the network (total employees) located on the nodes we just identified, with values equal to total_employees
###Code
net.set(gdf.node_ids, variable=gdf["total_employees"], name="total_employees")
###Output
_____no_output_____
###Markdown
Now calculate the shortest distance between every node in the network and add up all the jobs accessible within a chosen radius (2 km in the original example; the cell below uses 4 km). This will give back a series for every node on the network. Using this series, we can move up or down levels of the hierarchy by taking the nearest intersection node to any polygon
###Code
#access = net.aggregate(2000, type="sum", name="total_employees")
#access = net.aggregate(2500, type="sum", name="total_employees")
#access = net.aggregate(3000, type="sum", name="total_employees")
#access = net.aggregate(3500, type="sum", name="total_employees")
access = net.aggregate(4000, type="sum", name="total_employees")
#try 3.5km, 3km
access
access.name ='emp'
gdf = gdf.merge(access, left_on='node_ids', right_index=True)
gdf.plot(column='emp', scheme='quantiles', k=6)
gdf['id']= gdf.geoid.str[:11]
gdf.id
gdf = gdf.dissolve('id', aggfunc='sum')
tgdf = gdf
tgdf.plot()
###Output
_____no_output_____
###Markdown
now we'll grab the nearest intersection node for each tract and plot *tract*-level access
###Code
tgdf["node_ids"] = net.get_node_ids(tgdf.centroid.x, tgdf.centroid.y)
tgdf=tgdf.merge(access, left_on='node_ids', right_index=True)
tgdf.plot('emp_y', scheme="quantiles", k=5)
###Output
_____no_output_____
###Markdown
The idea then would be to identify employment centers at some density cutoff (e.g. everything in yellow), then drop out anything that doesn't meet the total employment threshold, e.g. do something like- select all tracts where access >= `density_threshold`- dissolve tract boundaries to give you contiguous employment center polys- select all emp centers where total employment >= `total_threshold` (a sketch of this last filter is added after the dissolve below)
###Code
centers = tgdf[tgdf.emp_y >= 10000].copy()  # .copy() avoids SettingWithCopyWarning when adding labels later
# change this density threshold as needed
###Output
_____no_output_____
###Markdown
Here are our employment centers in San Diego (based on the accessibility threshold alone)
###Code
centers.plot()
###Output
_____no_output_____
###Markdown
I don't think geopandas has a generic dissolve that groups contiguous objects... you have to supply a grouping column, so we need to create one. Simple with a `W`
###Code
from libpysal.weights import Queen
w = Queen.from_dataframe(centers)
w.component_labels
centers['labels'] = w.component_labels
centers=centers.dissolve(aggfunc='sum', by='labels')
centers.plot('emp_y', scheme='quantiles', k=8)
centers.emp_y
centers.to_file("../data/sdcenter_4km1k.json", driver="GeoJSON")
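# Final step sketched in the outline above: drop contiguous centers whose total employment
# falls below a cutoff. The 50,000-job threshold is an assumed illustration, not a value
# used elsewhere in this notebook.
total_threshold = 50000
large_centers = centers[centers['emp_y'] >= total_threshold]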
###Output
_____no_output_____
|
Lab 16: Scatter Plots.ipynb
|
###Markdown
**Data Visualization Lab** Estimated time needed: **45 to 60** minutes In this assignment you will be focusing on the visualization of data.The data set will be presented to you in the form of a RDBMS.You will have to use SQL queries to extract the data. Objectives In this lab you will perform the following: * Visualize the distribution of data.* Visualize the relationship between two features.* Visualize composition of data.* Visualize comparison of data. Demo: How to work with database Download database file.
###Code
!wget https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m4_survey_data.sqlite
###Output
--2021-11-18 16:52:55-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m4_survey_data.sqlite
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 198.23.119.245
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|198.23.119.245|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36679680 (35M) [application/octet-stream]
Saving to: ‘m4_survey_data.sqlite.5’
m4_survey_data.sqli 100%[===================>] 34.98M 52.1MB/s in 0.7s
2021-11-18 16:52:56 (52.1 MB/s) - ‘m4_survey_data.sqlite.5’ saved [36679680/36679680]
###Markdown
Connect to the database.
###Code
import sqlite3
conn = sqlite3.connect("m4_survey_data.sqlite") # open a database connection
###Output
_____no_output_____
###Markdown
Import pandas module.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Demo: How to run an sql query
###Code
# print how many rows are there in the table named 'master'
QUERY = """
SELECT COUNT(*)
FROM master
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
df = pd.read_sql_query(QUERY,conn)
df.head()
###Output
_____no_output_____
###Markdown
Demo: How to list all tables
###Code
# print all the tables names in the database
QUERY = """
SELECT name as Table_Name FROM
sqlite_master WHERE
type = 'table'
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
pd.read_sql_query(QUERY,conn)
###Output
_____no_output_____
###Markdown
Demo: How to run a group by query
###Code
QUERY = """
SELECT Age,COUNT(*) as count
FROM master
group by age
order by age
"""
pd.read_sql_query(QUERY,conn)
###Output
_____no_output_____
###Markdown
Demo: How to describe a table
###Code
table_name = 'master' # the table you wish to describe
QUERY = """
SELECT sql FROM sqlite_master
WHERE name= '{}'
""".format(table_name)
df = pd.read_sql_query(QUERY,conn)
print(df.iat[0,0])
###Output
CREATE TABLE "master" (
"index" INTEGER,
"Respondent" INTEGER,
"MainBranch" TEXT,
"Hobbyist" TEXT,
"OpenSourcer" TEXT,
"OpenSource" TEXT,
"Employment" TEXT,
"Country" TEXT,
"Student" TEXT,
"EdLevel" TEXT,
"UndergradMajor" TEXT,
"OrgSize" TEXT,
"YearsCode" TEXT,
"Age1stCode" TEXT,
"YearsCodePro" TEXT,
"CareerSat" TEXT,
"JobSat" TEXT,
"MgrIdiot" TEXT,
"MgrMoney" TEXT,
"MgrWant" TEXT,
"JobSeek" TEXT,
"LastHireDate" TEXT,
"FizzBuzz" TEXT,
"ResumeUpdate" TEXT,
"CurrencySymbol" TEXT,
"CurrencyDesc" TEXT,
"CompTotal" REAL,
"CompFreq" TEXT,
"ConvertedComp" REAL,
"WorkWeekHrs" REAL,
"WorkRemote" TEXT,
"WorkLoc" TEXT,
"ImpSyn" TEXT,
"CodeRev" TEXT,
"CodeRevHrs" REAL,
"UnitTests" TEXT,
"PurchaseHow" TEXT,
"PurchaseWhat" TEXT,
"OpSys" TEXT,
"BlockchainOrg" TEXT,
"BlockchainIs" TEXT,
"BetterLife" TEXT,
"ITperson" TEXT,
"OffOn" TEXT,
"SocialMedia" TEXT,
"Extraversion" TEXT,
"ScreenName" TEXT,
"SOVisit1st" TEXT,
"SOVisitFreq" TEXT,
"SOFindAnswer" TEXT,
"SOTimeSaved" TEXT,
"SOHowMuchTime" TEXT,
"SOAccount" TEXT,
"SOPartFreq" TEXT,
"SOJobs" TEXT,
"EntTeams" TEXT,
"SOComm" TEXT,
"WelcomeChange" TEXT,
"Age" REAL,
"Trans" TEXT,
"Dependents" TEXT,
"SurveyLength" TEXT,
"SurveyEase" TEXT
)
###Markdown
Hands-on Lab
###Code
# load the full 'master' table into a dataframe
QUERY = """
SELECT *
FROM master
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
df = pd.read_sql_query(QUERY,conn)
df.ITperson.value_counts()
###Output
_____no_output_____
###Markdown
Visualizing distribution of data Histograms Plot a histogram of `ConvertedComp.`
###Code
# your code goes here
import seaborn as sns
sns.histplot(df.ConvertedComp)
###Output
_____no_output_____
###Markdown
Box Plots Plot a box plot of `Age.`
###Code
# your code goes here
sns.boxplot(df.Age)
###Output
/opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Visualizing relationships in data Scatter Plots Create a scatter plot of `Age` and `WorkWeekHrs.`
###Code
# your code goes here
sns.scatterplot(df.Age, df.WorkWeekHrs)
###Output
/opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Bubble Plots Create a bubble plot of `WorkWeekHrs` and `CodeRevHrs`, use `Age` column as bubble size.
###Code
# your code goes here
sns.scatterplot(df.WorkWeekHrs, df.CodeRevHrs, size = df.Age)
###Output
/opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Visualizing composition of data Pie Charts Create a pie chart of the top 5 databases that respondents wish to learn next year. Label the pie chart with database names. Display percentages of each database on the pie chart.
###Code
# top 5 databases respondents want to learn next year
QUERY = """
SELECT *
FROM DatabaseDesireNextYear
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
db = pd.read_sql_query(QUERY,conn)
db = db.DatabaseDesireNextYear.value_counts()
db = db.head()
db
# your code goes here
import matplotlib.pyplot as plt
db.plot.pie(subplots=True, figsize=(11, 6), autopct='%1.1f%%')
###Output
_____no_output_____
###Markdown
Stacked Charts Create a stacked chart of median `WorkWeekHrs` and `CodeRevHrs` for the age group 30 to 35.
###Code
# your code goes here
age_group = df[(df['Age'] >= 30) & (df['Age'] <= 35)]
age_group
s1 = sns.barplot(x = 'Age', y = 'WorkWeekHrs', data = age_group, color = 'red')
s2 = sns.barplot(x = 'Age', y = 'CodeRevHrs', data = age_group, color = 'blue')
###Output
_____no_output_____
###Markdown
Visualizing comparison of data Line Chart Plot the median `ConvertedComp` for all ages from 45 to 60.
###Code
# your code goes here
age_group = df[(df['Age'] >= 45) & (df['Age'] <= 60)]
sns.lineplot(x = 'Age', y = 'ConvertedComp', data = age_group)
###Output
_____no_output_____
###Markdown
Bar Chart Create a horizontal bar chart using column `MainBranch.`
###Code
# your code goes here
main = df.MainBranch.value_counts()
main
sns.barplot(x=main.values, y=main.index, color="b")  # horizontal: counts on x, categories on y
# load the DatabaseWorkedWith table into a dataframe
QUERY = """
SELECT *
FROM DatabaseWorkedWith
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
db = pd.read_sql_query(QUERY,conn)
db = db.DatabaseWorkedWith.value_counts()
db
# load the DevType table into a dataframe
QUERY = """
SELECT *
FROM DevType
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
db = pd.read_sql_query(QUERY,conn)
db = db.DevType.value_counts()
db
###Output
_____no_output_____
###Markdown
Close the database connection.
###Code
conn.close()
###Output
_____no_output_____
|
Distances_and_angles.ipynb
|
###Markdown
Distances and Angles between Images. We are going to compute distances and angles between images. Learning objectives: By the end of this notebook, you will learn to 1. Write programs to compute distance. 2. Write programs to compute angle. "distance" and "angle" are useful beyond their usual interpretation. They are useful for describing __similarity__ between objects. You will first use the functions you wrote to compare MNIST digits. Furthermore, we will use these concepts for implementing the K Nearest Neighbors algorithm, which is a useful algorithm for classifying objects according to distance.
###Code
# PACKAGE: DO NOT EDIT
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scipy
import sklearn
from sklearn.datasets import fetch_mldata
from ipywidgets import interact
MNIST = fetch_mldata('MNIST original', data_home='./MNIST')
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# GRADED FUNCTION: DO NOT EDIT
def distance(x0, x1):
"""Compute distance between two matrices x0, x1 using the dot product"""
x0=x0.astype(float)
x1=x1.astype(float)
x0=x0.ravel()
x1=x1.ravel()
y=x0-x1
#distance = np.sqrt(np.dot(y.T,y))
distance = np.sqrt((np.sum(y.T @ y)))# <-- EDIT THIS
return distance
def angle(x0, x1):
"""Compute the angle between two vectors x0, x1 using the dot product"""
x0=x0.astype(float)
x1=x1.astype(float)
x0=x0.ravel()
x1=x1.ravel()
normx0= (x0.T@ x0)
normx1= (x1.T@ x1)
dotx0x1=(x0.T@ x1)
angl=dotx0x1/np.sqrt((normx0*normx1))
#angle = np.arccos(np.dot(x0.T,x1)/(np.sqrt(np.dot(x0.T,x0)*np.dot(x1.T,x1)))) # <-- EDIT THIS
angle=np.arccos(angl)
return angle
def plot_vector(v, w):
fig = plt.figure(figsize=(4,4))
ax = fig.gca()
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.grid()
ax.arrow(0, 0, v[0], v[1], head_width=0.05, head_length=0.1,
length_includes_head=True, linewidth=2, color='r');
ax.arrow(0, 0, w[0], w[1], head_width=0.05, head_length=0.1,
length_includes_head=True, linewidth=2, color='r');
# Some sanity checks, you may want to have more interesting test cases to test your implementation
a = np.array([1,0])
b = np.array([0,1])
np.testing.assert_almost_equal(distance(a, b), np.sqrt(2))
assert((angle(a,b) / (np.pi * 2) * 360.) == 90)
plot_vector(b, a)
plt.imshow(MNIST.data[MNIST.target==0].reshape(-1, 28, 28)[0], cmap='gray');
###Output
_____no_output_____
###Markdown
But we have the following questions:1. What does it mean for two digits in the MNIST dataset to be _different_ by our distance function? 2. Furthermore, how are different classes of digits different for MNIST digits? Let's find out! For the first question, we can see just how the distance between digits compares among all distances for the first 500 digits:
###Code
distances = []
for i in range(len(MNIST.data[:500])):
for j in range(len(MNIST.data[:500])):
distances.append(distance(MNIST.data[i], MNIST.data[j]))
@interact(first=(0, 499), second=(0, 499), continuous_update=False)
def show_img(first, second):
plt.figure(figsize=(8,4))
f = MNIST.data[first].reshape(28, 28)
s = MNIST.data[second].reshape(28, 28)
ax0 = plt.subplot2grid((2, 2), (0, 0))
ax1 = plt.subplot2grid((2, 2), (1, 0))
ax2 = plt.subplot2grid((2, 2), (0, 1), rowspan=2)
#plt.imshow(np.hstack([f,s]), cmap='gray')
ax0.imshow(f, cmap='gray')
ax1.imshow(s, cmap='gray')
ax2.hist(np.array(distances), bins=50)
d = distance(f, s)
ax2.axvline(x=d, ymin=0, ymax=40000, color='C4', linewidth=4)
ax2.text(0, 12000, "Distance is {:.2f}".format(d), size=12)
ax2.set(xlabel='distance', ylabel='number of images')
plt.show()
# GRADED FUNCTION: DO NOT EDIT
def most_similar_image():
"""Find the index of the digit, among all MNIST digits
that is the second-closest to the first image in the dataset (the first image is closest to itself trivially).
Your answer should be a single integer.
"""
ref = MNIST.data[0]
    most_similar_index = 1  # start from the first candidate that is actually compared
a=distance(MNIST.data[1],MNIST.data[0])
for i in range(1,len(MNIST.data[:500])):
b=distance(MNIST.data[i],MNIST.data[0])
if(b<a):
most_similar_index = i
a=b
# return np.argmin(result) # 60
return most_similar_index
result = most_similar_image()
result
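# A vectorized way to find the same neighbour (illustrative; like the loop above it only
# scans the first 500 digits): compute all distances to image 0 at once and take the argmin.
_dists = np.linalg.norm(MNIST.data[1:500].astype(float) - MNIST.data[0].astype(float), axis=1)
_closest = int(np.argmin(_dists)) + 1  # +1 because the scan starts at index 1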
###Output
_____no_output_____
###Markdown
For the second question, we can compute a `mean` image for each class of image, i.e. we compute mean image for digits of `1`, `2`, `3`,..., `9`, then we compute pairwise distance between them. We can organize the pairwise distances in a 2D plot, which would allow us to visualize the dissimilarity between images of different classes. First we compute the mean for digits of each class.
###Code
means = {}
for n in np.unique(MNIST.target).astype(np.int):
means[n] = np.mean(MNIST.data[MNIST.target==n], axis=0)
###Output
_____no_output_____
###Markdown
For each pair of classes, we compute the pairwise distance and store them into MD (mean distances). We store the angles between the mean digits in AG
###Code
MD = np.zeros((10, 10))
AG = np.zeros((10, 10))
for i in means.keys():
for j in means.keys():
MD[i, j] = distance(means[i], means[j])
AG[i, j] = angle(means[i].ravel(), means[j].ravel())
###Output
_____no_output_____
###Markdown
Now we can visualize the distances! Here we put the pairwise distances. The colorbar shows how the distances map to color intensity.
###Code
fig, ax = plt.subplots()
grid = ax.imshow(MD, interpolation='nearest')
ax.set(title='Distances between different classes of digits',
xticks=range(10),
xlabel='class of digits',
ylabel='class of digits',
yticks=range(10))
fig.colorbar(grid)
plt.show()
###Output
_____no_output_____
###Markdown
Similarly for the angles.
###Code
fig, ax = plt.subplots()
grid = ax.imshow(AG, interpolation='nearest')
ax.set(title='Angles between different classes of digits',
xticks=range(10),
xlabel='class of digits',
ylabel='class of digits',
yticks=range(10))
fig.colorbar(grid)
plt.show();
###Output
_____no_output_____
###Markdown
K Nearest NeighborsIn this section, we will explore the [KNN classification algorithm](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm).A classification algorithm takes input some data and use the data to determine which class (category) this piece of data belongs to.As a motivating example, consider the [iris flower dataset](https://archive.ics.uci.edu/ml/datasets/iris). The dataset consistsof 150 data points where each data point is a feature vector $\boldsymbol x \in \mathbb{R}^4$ describing the attribute of a flower in the dataset, the four dimensions represent 1. sepal length in cm 2. sepal width in cm 3. petal length in cm 4. petal width in cm and the corresponding target $y \in \mathbb{Z}$ describes the class of the flower. It uses the integers $0$, $1$ and $2$ to represent the 3 classes of flowers in this dataset.0. Iris Setosa1. Iris Versicolour 2. Iris Virginica
###Code
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
iris = datasets.load_iris()
print('data shape is {}'.format(iris.data.shape))
print('class shape is {}'.format(iris.target.shape))
###Output
data shape is (150, 4)
class shape is (150,)
###Markdown
For the simplicity of the exercise, we will only use the first 2 dimensions (sepal length and sepal width) of as features used to classify the flowers.
###Code
X = iris.data[:, :2] # use first two version for simplicity
y = iris.target
###Output
_____no_output_____
###Markdown
We create a scatter plot of the dataset below. The x and y axis represent the sepal length and sepal width of the dataset, and the color of the points represent the different classes of flowers.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
iris = datasets.load_iris()
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
K = 3
x = X[-1]
fig, ax = plt.subplots(figsize=(4,4))
for i, iris_class in enumerate(['Iris Setosa', 'Iris Versicolour', 'Iris Virginica']):
idx = y==i
ax.scatter(X[idx,0], X[idx,1],
c=cmap_bold.colors[i], edgecolor='k',
s=20, label=iris_class);
ax.set(xlabel='sepal length (cm)', ylabel='sepal width (cm)')
ax.legend();
###Output
_____no_output_____
###Markdown
The idea behind a KNN classifier is pretty simple: Given a training set $\boldsymbol X \in \mathbb{R}^{N \times D}$ and $\boldsymbol y \in \mathbb{Z}^N$, we predict the label of a new point $\boldsymbol x \in \mathbb{R}^{D}$ __as the label of the majority of its "K nearest neighbor"__ (hence the name KNN) by some distance measure (e.g the Euclidean distance).Here, $N$ is the number of data points in the dataset, and $D$ is the dimensionality of the data.
###Code
# GRADED FUNCTION: DO NOT EDIT
def pairwise_distance_matrix(X, Y):
"""Compute the pairwise distance between rows of X and rows of Y
Arguments
----------
X: ndarray of size (N, D)
Y: ndarray of size (M, D)
Returns
--------
D: matrix of shape (N, M), each entry D[i,j] is the distance between
ith row of X and the jth row of Y (we use the dot product to compute the distance).
"""
N,D = X.shape
M, _ = Y.shape
distance_matrix = np.zeros((N, M)) # <-- EDIT THIS
for i in range(N):
for j in range(M):
distance_matrix[i,j] = distance(X[i,:], Y[j,:]) # <-- EDIT THIS
return distance_matrix
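# A possible vectorized alternative (a sketch, not part of the graded function): expand
# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y and compute everything with matrix operations
# instead of the double Python loop.
def pairwise_distance_matrix_vectorized(X, Y):
    X = X.astype(float)
    Y = Y.astype(float)
    sq = (X ** 2).sum(axis=1)[:, None] + (Y ** 2).sum(axis=1)[None, :] - 2 * X @ Y.T
    return np.sqrt(np.maximum(sq, 0.0))  # clip tiny negative values caused by round-off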
###Output
_____no_output_____
###Markdown
For `pairwise_distance_matrix`, you may be tempted to iterate through rows of $\boldsymbol X$ and $\boldsymbol Y$ and fill in the distance matrix, but that is slow! Can you think of some way to vectorize your computation (i.e. make it faster by using numpy/scipy operations only)? One broadcasting-based sketch is included at the end of the cell above.
###Code
# GRADED FUNCTION: DO NOT EDIT
def KNN(k, X, y, x):
"""K nearest neighbors
K: number of nearest neighbors
X: training input locations
y: training labels
x: test input
"""
N, D = X.shape
num_classes = len(np.unique(y))
dist = pairwise_distance_matrix(X, x) # <-- EDIT THIS np.zeros(N)
# Next we make the predictions
ypred = np.zeros(num_classes)
sorted_indx = np.argsort(dist, axis = 0)[:k]
classes = y[sorted_indx] # find the labels of the k nearest neighbors
for c in np.unique(classes):
ypred[c] = len(classes[classes==c])# <-- EDIT THIS 0
return np.argmax(ypred)
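# Quick usage check (illustrative): classify the held-out point x = X[-1] defined earlier,
# using the remaining flowers as the training set; K is the value set above.
predicted_class = KNN(K, X[:-1], y[:-1], x.reshape(1, 2))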
###Output
_____no_output_____
###Markdown
We can also visualize the "decision boundary" of the KNN classifier, which is the region of a problem space in which the output label of a classifier is ambiguous. This would help us develop an intuition of how KNN behaves in practice.
###Code
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
step = 0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, step),
np.arange(y_min, y_max, step))
ypred = []
for data in np.array([xx.ravel(), yy.ravel()]).T:
ypred.append(KNN(K, X, y, data.reshape(1,2)))
fig, ax = plt.subplots(figsize=(4,4))
ax.pcolormesh(xx, yy, np.array(ypred).reshape(xx.shape), cmap=cmap_light)
ax.scatter(X[:,0], X[:,1], c=y, cmap=cmap_bold, edgecolor='k', s=20);
###Output
_____no_output_____
|
ipynb_pt-br/02 - ML - NLP.ipynb
|
###Markdown
Supervised Machine Learning - Example 02 _Text Classification_ A _machine learning_ technique that assigns a set of pre-defined categories to open text. This example uses the Python library **Scikit-learn** (http://scikit-learn.org/). **Scikit-learn** is an open-source _machine learning_ library for Python that offers a variety of regression, classification and _clustering_ algorithms. Goal: * Automatically flag messages as _phishing_ based on their content. Input data: **Source:** * emails-enron-features.csv: regular e-mails, without _phishing_, taken from the Enron Corpus; * emails-phishing-features.csv: _phishing_ e-mails. Both files were created from a _fork_ of the project: https://github.com/diegoocampoh/MachineLearningPhishing **Features description:** * ID: (numerical) e-mail ID extracted from the mbox file; * Content Type: (object) [Content type](https://en.wikipedia.org/wiki/MIME) of the e-mail; * Message: (object) content of the e-mail; **Label** * Phishy: (boolean) True if the e-mail is considered _phishing_. Summary: 1. [Import and Load](p1) 2. [Data Exploration](p2) 3. [Prepare the Data Set](p3) 4. [Training and Testing](p4) 5. [Exercise](p5) 1. Import and Load The data set is a concatenation of two [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) files created from the [Enron e-mail corpus](https://www.cs.cmu.edu/~enron/) and a [collection of phishing e-mails](http://monkey.org/%7Ejose/wiki/doku.php?id=PhishingCorpus).
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import metrics
# For Google Colab (upload files first)
df = pd.concat(map(pd.read_csv,
['/content/emails-enron.csv',
'/content/emails-phishing.csv']))
# For docker
#df = pd.concat(map(pd.read_csv,
# ['data/emails-enron.csv',
# 'data/emails-phishing.csv']))
df.head()
###Output
_____no_output_____
###Markdown
2. Data Exploration. The initial stage of data analysis, in which we explore the data in an unstructured way in order to discover initial patterns, characteristics and points of interest.
###Code
type(df)
df.shape
###Output
_____no_output_____
###Markdown
Check for null values. Null values in the data can reduce the statistical power of a study, producing biased estimates and leading to invalid conclusions ([Why are missing values a problem?](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3668100/)).
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Check the column with the "output" data, the *Phishy* label
###Code
df['Phishy'].unique()
df['Phishy'].value_counts()
###Output
_____no_output_____
###Markdown
We can see that in this specific case we have a [balanced data set](https://medium.com/analytics-vidhya/what-is-balance-and-imbalance-dataset-89e8d7f46bc5): 2000/4000 e-mails (50%) are labeled as _Phishy_. This means the _machine learning_ model we are going to build needs to perform **better than 50%** to be better than a random guess. Check the types of the _feature_ columns
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
"Saco de Palavras" (_Bag of Words_) e TF-IDF Uma **Bag of Words** é uma representação simplificada utilizada em Processamento Natural de Linguagem (_Natural Language Processing - NLP_), em que um texto (uma frase ou um documento) é representado como um "saco" (conjunto) de suas palavras, desconsiderando qualquer análise gramatical ou ordem das palavras, porém mantendo a multiplicidade. **Exemplo:**
###Code
# Example sentences (kept in Portuguese; they are just sample data)
s1 = "Em um buraco no chão vivia um Hobbit"
s2 = "Não é sensato deixar um dragão fora dos teus cálculos se vives perto dele"
# Vocabulary
vocab = {}
i = 1
for word in s1.lower().split()+s2.lower().split():
if word in vocab:
continue
else:
vocab[word]=i
i+=1
print(vocab)
# Empty vectors with an index for each word in the vocabulary
s1_vector = ['s1']+[0]*len(vocab)
# Map the frequencies of each word to the vectors
for word in s1.lower().split():
s1_vector[vocab[word]]+=1
print(s1_vector)
# Empty vectors with an index for each word in the vocabulary
s2_vector = ['s2']+[0]*len(vocab)
# Map the frequencies of each word to the vectors
for word in s2.lower().split():
s2_vector[vocab[word]]+=1
print(s2_vector)
# Vectors comparison
print(f'{s1_vector}\n{s2_vector}')
###Output
['s1', 1, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
['s2', 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
###Markdown
In the example above, each vector can be considered a **bag of words**. Extending this logic to thousands of entries, we can see that the vocabulary dictionary could grow to hundreds of thousands of words. In practice, the **bag of words** model is used mainly as a _feature_ generation tool. After turning the text into a **bag of words**, it is possible to compute several metrics to characterize the text. The most common type of metric, or _feature_, computed from this model is term frequency, i.e. the number of times a term appears in the text. TF-IDF **TF-IDF**, short for 'Term Frequency–Inverse Document Frequency', is a numerical statistic intended to reflect the importance of a word to a document. It basically considers term frequencies (number of occurrences of the term / number of terms in the sentence) and inverse document frequency (total number of sentences / number of sentences that contain the term), which lowers the weight of terms that occur very frequently in the document and raises the weight of terms that occur rarely. Other interesting definitions * **Stop words**: irrelevant words, frequent terms that can be ignored in the vocabulary (e.g. 'a', 'this', 'and', ...) * **Tokenization**: splitting documents into individual terms (often using concepts from morphology) * **Word stems**: using the root of a word (e.g. instead of the vocabulary containing both 'dragon' and 'dragons', it could include only 'dragon') * **Tagging**: adds more dimensions to the _tokens_ (part of speech, grammatical dependencies, etc.) 3. Preparing the Data Set Define _feature_ and label columns. **Feature** columns are those used to predict the **label** columns. This time we will use the text itself as the feature. By **convention**, _features_ are represented as **X** (uppercase) and labels as **y** (lowercase).
###Code
from sklearn.model_selection import train_test_split
X = df['Message']
y = df['Phishy']
###Output
_____no_output_____
###Markdown
Split the data into training and test sets Here we will assign 70% of the data to the training phase and 30% to the test phase. We will use the `random_state` configuration variable (a random seed) to guarantee that the results can be replicated.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
X_train.shape
###Output
_____no_output_____
###Markdown
CountVectorizer Text pre-processing, tokenization and the ability to filter out _stopwords_ are features already included in the [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) module, which builds a dictionary of _features_ and transforms documents into vectors of those _features_, similar to a **bag of words**.
###Code
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_cv = count_vect.fit_transform(X_train)
X_train_cv.shape
###Output
_____no_output_____
###Markdown
After running `CountVectorizer`, our training data set now holds 2800 e-mails described by **82671** _features_.
###Code
# Extracted features
count_vect.get_feature_names_out()
###Output
_____no_output_____
###Markdown
Tfidf Transformer TF-IDF can be computed using Scikit-learn's [TfidfTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html) module.
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_cv)
X_train_tfidf.shape
###Output
_____no_output_____
###Markdown
The `fit_transform()` method performs two operations in this case: it fits an estimator to the data and then transforms our word-count matrix (X_train_cv) into a tf-idf representation. TfidfVectorizer The [TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) module combines the `CountVectorizer` and `TfidfTransformer` modules into a single one.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
X_train_tfidf = vectorizer.fit_transform(X_train)
X_train_tfidf.shape
###Output
_____no_output_____
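To get a feel for what these tf-idf features look like, the highest-weighted terms of a single e-mail can be printed (a sketch that assumes the `vectorizer` and `X_train_tfidf` objects created just above):

```python
import numpy as np

row = X_train_tfidf[0].toarray().ravel()    # tf-idf vector of the first training e-mail
terms = vectorizer.get_feature_names_out()  # vocabulary learned by TfidfVectorizer
for i in np.argsort(row)[::-1][:5]:         # indices of the 5 largest weights
    print(f"{terms[i]:<20} {row[i]:.3f}")
```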
###Markdown
4. Training and Testing Since our training set needs to be vectorized before being processed by the classifier, we can use a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), a class that works as a composite classifier. In this example, we will use a [Linear Support Vector Classification](https://scikit-learn.org/stable/modules/svm.html) algorithm ([LinearSVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html)) because of its good performance with sparse inputs.
###Code
from sklearn import metrics  # (re)imported here so this cell is self-contained
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
# Pipeline
lsvc = Pipeline([
('tfidf', TfidfVectorizer()),
('clf', LinearSVC()),
])
# Train
lsvc = lsvc.fit(X_train, y_train)
# Create a prediction set
predictions = lsvc.predict(X_test)
# Print a confusion matrix
cm = metrics.confusion_matrix(y_test, predictions, labels=lsvc.classes_)
disp = metrics.ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=lsvc.classes_)
disp.plot()
# Classification report
print(metrics.classification_report(y_test,predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
###Output
0.9941666666666666
###Markdown
5. Exercise 5.1. Test new data with the trained model to check its prediction
###Code
test_email = "Dear user, \
Your e-mail quota is running out. \
Please follow the link below to fix the issue: \
http://www.mailquota.com?/exec/fix\
and update your account information."
print(lsvc.predict([test_email]))
###Output
[ True]
###Markdown
5.2. Improve the model by using `stop_words`
###Code
stop_words = ...
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
# Pipeline
lsvc = Pipeline([
('tfidf', TfidfVectorizer(stop_words=stop_words)),
('clf', LinearSVC()),
])
# Train
...
# Create a prediction set
...
# Print a confusion matrix
...
# Classification report
...
# Print the overall accuracy
...
###Output
_____no_output_____
|
notebook/result_analysis/first_round_result_regression-Eva_on_ave.ipynb
|
###Markdown
This notebook implements regression on the first-round results with cross validation. Pipeline - Data pre-processing: run codes/data_generating.py - Reading data: read data/firstRound_4h.csv into a pandas dataframe - Cross validation: training (training, validating); testing, KFold (K = ?) - Embedding - Onehot - Label - Kernel - RBF - DotProduct - Spectrum - lmer: l = ? - Padding_flag: add special characters before and after sequences, e.g. 'ACTGAA' -> 'ZZ' + 'ACTGAA' + 'ZZ' - gap_flag: add gapped features, e.g. 3-mer-1-gap - normalised_kernel: e.g. zero-mean, unit-norm, unit-var - Sum of Spectrum - a K_A + b K_B + c K_C, where a + b + c = 1 - Regression model - Gaussian process regression - alpha: scalar value added to the diagonal - heteroscedastic: noise levels are learned as well (equivalent to normalising each replicate to have the same derivatives) - Evaluation - metric: e.g. mean squared error; R2 - true label: either the samples or the sample mean. Splitting in terms of sequences vs samples? For each sequence, we have at least three biological replicates. There are two ways to split the data into training and testing sets: splitting in terms of sequences, where if a sequence goes into the training dataset then all replicates of that sequence belong to the training dataset; and splitting in terms of samples, where the replicates of one sequence can land in either the training or the testing dataset. Both methods make sense for evaluating Gaussian process regression. Since the goal is to design good sequences for the second-round experiment, we assume that at training time we have no information about the sequences in the testing dataset; splitting in terms of sequences therefore better simulates the sequence design task. In this notebook, we show both splitting methods. We expect splitting in terms of samples to give a lower test error, since the model may already have seen the sequences. The test error when splitting by sequences should have higher variance, since the prediction depends on how similar the test sequences are to the training sequences. But again, our goal is to decrease both the test error and its variance for the sequence-splitting case. Evaluate on samples vs sample mean? For training, we use the samples as labels, since we model the label for each sequence as samples from an unknown (Gaussian) reward distribution. For testing, we use the sample mean for the sequence (i.e. the mean of the three replicates) as the label. The ideal label would be the true mean of the underlying distribution (assumed Gaussian) of a sequence, but we do not know the true mean. The only choice is to approximate it with the sample mean, which risks deviating a lot from the true mean since the number of samples is quite low (and the sample variance is not low). Choices to make within cross validation - alpha: scalar value added to the diagonal - kernel - lmer: l = ? - For Sum_Spectrum_Kernel: b for (1 - b)/2 K_A + b K_B + (1 - b)/2 K_C
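To make the 'Spectrum' kernel above concrete, here is a minimal, illustration-only sketch of an l-mer spectrum kernel between two DNA strings. The notebook's real kernels (imported below from `codes.kernels_for_GPK`) also handle padding, gaps, shifts and normalisation; nothing in this sketch comes from that module:

```python
from collections import Counter
from itertools import product

def spectrum_features(seq, l=3, alphabet="ACGT"):
    """Count every length-l substring (l-mer) of a sequence."""
    counts = Counter(seq[i:i + l] for i in range(len(seq) - l + 1))
    return [counts.get("".join(k), 0) for k in product(alphabet, repeat=l)]

def spectrum_kernel(s1, s2, l=3):
    """Spectrum kernel value = dot product of the two l-mer count vectors."""
    return sum(a * b for a, b in zip(spectrum_features(s1, l), spectrum_features(s2, l)))

print(spectrum_kernel("ACGTACGT", "ACGTTTTT", l=3))  # shared ACG and CGT 3-mers -> 4
```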
###Code
# direct to proper path
import os
import sys
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
from collections import defaultdict
import math
import json
import xarray as xr
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import PairwiseKernel, DotProduct, RBF
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score, mean_squared_error, make_scorer
from sklearn.model_selection import KFold
from codes.embedding import Embedding
from codes.environment import Rewards_env
from codes.ucb import GPUCB, Random
from codes.evaluations import evaluate, plot_eva
from codes.regression import *
from codes.kernels_for_GPK import Spectrum_Kernel, Sum_Spectrum_Kernel, WeightedDegree_Kernel
from ipywidgets import IntProgress
from IPython.display import display
import warnings
%matplotlib inline
kernel_dict = {
'Spectrum_Kernel': Spectrum_Kernel,
'Mixed_Spectrum_Kernel': Mixed_Spectrum_Kernel,
'WD_Kernel': WeightedDegree_Kernel,
'Sum_Spectrum_Kernel': Sum_Spectrum_Kernel,
'WD_Kernel_Shift': WD_Shift_Kernel
}
Path = '../../data/firstRound_4h_normTrue_formatSeq_logTrue.csv'
df = pd.read_csv(Path)
df.head(20)
#df = df[df['Group'] != 'bps'].reset_index()
df.shape
plt.hist(df['AVERAGE'])
###Output
_____no_output_____
###Markdown
Repeated KFold for sequences
###Code
num_split = 5
num_repeat = 10
s_list = [0,1,2]
kernel = 'WD_Kernel_Shift'
alpha_list= [1, 10, 20, 30, 40, 50, 60, 70, 80, 90,100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200]
l_lists = [[3], [4], [5], [6]]
result_DataArray = Repeated_kfold(df, num_split, num_repeat, kernel, alpha_list, embedding, eva_metric, eva_on_ave_flag,
l_lists, s_list)
import pickle
with open('repeated_kfold_wd_shift_logTrue.pickle', 'wb') as handle:
pickle.dump(result_DataArray, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('repeated_kfold_wd_shift_logTrue.pickle', 'rb') as handle:
result_pkl = pickle.load(handle)
result_pkl.loc[dict(train_test = 'Train')]
result_DataArray[1].mean(axis = -1).mean(axis = -1)
###Output
_____no_output_____
###Markdown
Double Loop Cross Validation
###Code
# setting
cv = 5
test_size = 0.2
random_state = 24
embedding = 'label'
eva_on_ave_flag = True # true label is the sample mean instead of individual samples, since the prediction is the posterior mean
eva_metric = mean_squared_error # mean squared error gives a more stable optimal hyperparameter choice than the r2 score
kernel_list = ['Spectrum_Kernel',
#'Mixed_Spectrum_Kernel',
#'Sum_Spectrum_Kernel',
#'WD_Kernel',
#'WD_Kernel_Shift'
]
alpha_list = [0.1, 1, 2, 3, 5]
l_lists = [[3], [6]]
b_list = [0.34, 0.6, 0.8]
weight_flag = False
padding_flag = False
gap_flag = False
optimal_para, test_scores = cross_val(df, cv, random_state, test_size, kernel_list, alpha_list, embedding, eva_metric, eva_on_ave_flag,
l_lists, b_list, weight_flag, padding_flag, gap_flag)
###Output
_____no_output_____
###Markdown
Optimal choices for the sequence-splitting case - optimal kernel: Spectrum_Kernel - optimal alpha: 2 - optimal l list: [2, 3, 4, 5, 6] Cross Validation for samples
###Code
Path = '../../data/firstRound_4h_normTrue_formatSample.csv'
df_samples = pd.read_csv(Path)
df_samples.head()
df_samples_frr = df_samples[df_samples['Group'] != 'Baseline data'].reindex()
df_samples_frr.shape
optimal_para, test_scores = cross_val(df_samples, cv, random_state, test_size, kernel_list, alpha_list, embedding, eva_metric, eva_on_ave_flag,
l_lists, b_list, weight_flag, padding_flag, gap_flag)
optimal_para, test_scores = cross_val(df_samples_frr, cv, random_state, test_size, kernel_list, alpha_list, embedding, eva_metric, eva_on_ave_flag,
l_lists, b_list, weight_flag, padding_flag, gap_flag)
###Output
_____no_output_____
|
Functional Programming/Function Parameters/04 - Star-Args.ipynb
|
###Markdown
\*args Recall from iterable unpacking:
###Code
a, b, *c = 10, 20, 'a', 'b'
print(a, b)
print(c)
###Output
_____no_output_____
###Markdown
We can use a similar concept in function definitions to allow for arbitrary numbers of **positional** parameters/arguments:
###Code
def func1(a, b, *args):
print(a)
print(b)
print(args)
func1(1, 2, 'a', 'b')
###Output
1
2
('a', 'b')
###Markdown
A few things to note: 1. Unlike iterable unpacking, **\*args** will be a **tuple**, not a list. 2. The name of the parameter **args** can be anything you prefer. 3. You cannot specify positional arguments **after** the **\*args** parameter - this does something different that we'll cover in the next lecture.
###Code
def func1(a, b, *my_vars):
print(a)
print(b)
print(my_vars)
func1(10, 20, 'a', 'b', 'c')
def func1(a, b, *c, d):
print(a)
print(b)
print(c)
print(d)
func1(10, 20, 'a', 'b', 100)
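# This call raises TypeError: func1() missing 1 required keyword-only argument: 'd'
# because 'a', 'b' and 100 are all swallowed by *c, so d can only be passed by keyword.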
###Output
_____no_output_____
###Markdown
Let's see how we might use this to calculate the average of an arbitrary number of parameters.
###Code
def avg(*args):
count = len(args)
total = sum(args)
return total/count
avg(2, 2, 4, 4)
###Output
_____no_output_____
###Markdown
But watch what happens here:
###Code
avg()
###Output
_____no_output_____
###Markdown
The problem is that we passed zero arguments. We can fix this in one of two ways:
###Code
def avg(*args):
count = len(args)
total = sum(args)
if count == 0:
return 0
else:
return total/count
avg(2, 2, 4, 4)
avg()
###Output
_____no_output_____
###Markdown
But we may not want to allow specifying zero arguments, in which case we can split our parameters into a required (non-defaulted) positional argument, and the rest:
###Code
def avg(a, *args):
count = len(args) + 1
total = a + sum(args)
return total/count
avg(2, 2, 4, 4)
avg()
###Output
_____no_output_____
###Markdown
As you can see, an exception occurs if we do not specify at least one argument. Unpacking an iterable into positional arguments
###Code
def func1(a, b, c):
print(a)
print(b)
print(c)
l = [10, 20, 30]
func1(*l)
###Output
10
20
30
###Markdown
This will **not** work:
###Code
func1(l)
###Output
_____no_output_____
###Markdown
The function expects three positional arguments, but we only supplied a single one (albeit a list). But we could unpack the list, and **then** pass it as the function arguments:
###Code
*l,
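# the starred expression above, *l, (with the trailing comma) evaluates to the tuple (10, 20, 30)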
func1(*l)
###Output
_____no_output_____
###Markdown
What about mixing positional and keyword arguments with this?
###Code
def func1(a, b, c, *d):
print(a)
print(b)
print(c)
print(d)
func1(10, c=20, b=10, 'a', 'b')
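# This raises SyntaxError: positional argument follows keyword argument,
# because once arguments are passed by keyword, later arguments can't be positional.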
###Output
_____no_output_____
|
step_by_step.ipynb
|
###Markdown
00 Introduction HTML If our aim is to scrape websites we first have to talk about HTML. Because, behind every web page is an HTML document. While we're not going to write any HTML in this course, we do have to know how to read it! If you're coming from a web development background, or if you've written some HTML, this little introduction will be a breeze! And if you have no idea what HTML is or what it looks like, don't sweat! We'll start at the beginning... Fire up your favourite web browser (I like Firefox), and bring up [Google](www.google.com): Google is a great case study in HTML because it's famously minimal. To see the underlying HTML that renders the Google home page inside the browser, right click anywhere on the page and select `Inspect Element`: This will bring up the "Inspector": The Inspector connects each section of HTML code to each section of the displayed page. Hovering over a piece of code in the Inspector will highlight the linked element inside the browser. Boilerplate There are a lot of `<>` brackets in HTML. And the Google home page is no exception. The page is riddled with `<div>`, `<script>` and `<style>` tags, each helping, in their own way, to structure and render the result that we see inside the browser. Though Google is (relatively) simple in HTML terms, there's a lot of code in the Inspector that deserves unpacking. We won't. Instead, let's take a couple of gigantic steps back to look at, and appreciate, the minimum amount of boilerplate HTML code required to render a (blank) page:
```html
<!DOCTYPE html>
<html>
  <head>
  </head>
  <body>
  </body>
</html>
```
A couple of things to note: 1. The document type is declared at the top 2. The entire page is wrapped in an `<html>` tag 3. Open tags (`<html>`) are eventually followed by close tags (`</html>`) 4. The page is divided into two parts (`head` and `body`) Every HTML document is pretty well segmented into two parts:- head: metadata and scripts and styling- body: actual content Here's a more complete page (still not very impressive):
###Code
with open('data/bad.html', 'r') as f:
html = f.read()
from IPython.display import HTML; HTML(html)
###Output
_____no_output_____
###Markdown
Looking at the raw html text we can see the "page" rendered with the following code:
###Code
print(html)
###Output
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>This is HTML</title>
</head>
<body>
<h1>This is HTML</h1>
<p>It's not the greatest...</p>
<div class='foo'>...but it is <i>functional</i>.</div>
<br />
<div>For good measure, here's some more of it!</div>
<p>And an image:</p>
<img src='https://invisiblebread.com/comics-firstpanel/2015-03-03-scrape.png' height='200' />
<p id='bar'>Isn't HTML great?!</p>
</body>
</html>
###Markdown
gazpacho Notice the various different tags in the "This is HTML" document. And now imagine that we want to extract information from this page. In order to get all of the `<p>` tags, for instance, we'll use a tool called [gazpacho](https://github.com/maxhumber/gazpacho) that can be installed at the command line with:
###Code
!pip install gazpacho
###Output
_____no_output_____
###Markdown
The main part of gazpacho is the `Soup` wrapper which allows us to parse over HTML documents, it's imported accordingly:
###Code
from gazpacho import Soup
###Output
_____no_output_____
###Markdown
To enable parsing, first wrap the html string in a gazpacho `Soup` object:
###Code
soup = Soup(html)
###Output
_____no_output_____
###Markdown
And use the main `find` method on the tag you wish to target:
###Code
soup.find('p')
###Output
_____no_output_____
###Markdown
The `find` method, by default, will return a list if there is more than one element that shares that tag, or a soup object if there's just one. To isolate specific tags, we can target tag attributes (`attrs`) with a Python dictionary. So, if we're interested in scraping this slice of html: `<p id='bar'>Isn't HTML great?!</p>` We can run:
###Code
soup.find('p', attrs={'id': 'bar'})
###Output
_____no_output_____
###Markdown
To get the text inside the HTML, we can ask gazpacho to return the `.text` attribute:
###Code
soup.find('p', {'id': 'bar'}).text
###Output
_____no_output_____
###Markdown
And to find all the `div`s on the page we can do the same thing but with `div` as the first argument:
###Code
soup.find('div')
###Output
_____no_output_____
###Markdown
To get just the first `div` (and ignore the rest):
###Code
soup.find('div', mode='first')
###Output
_____no_output_____
###Markdown
And to isolate the `div` tags that have `class=foo`:
###Code
soup.find('div', {'class': 'foo'}).text
###Output
_____no_output_____
###Markdown
You can literally isolate any tag!
###Code
soup.find('i').text
###Output
_____no_output_____
###Markdown
But sometimes you want to just get rid of tags, so this is accomplished by calling:
###Code
soup.find('div', {'class': 'foo'}).remove_tags()
###Output
_____no_output_____
###Markdown
01 get HTML is the stuff of websites. Importing HTML documents from our computer is neither fun nor realistic! So let's "get" HTML from an actual website. To get, or download, the HTML from a specific page we'll use the `get` function from gazpacho:
###Code
from gazpacho import get
###Output
_____no_output_____
###Markdown
Status Codes If everything is hunky-dory `get` will just return the raw HTML. But if something is wrong it will raise an HTTP status code. While everyone is familiar with 404 and maybe 503, here's a helpful list of some common codes that you might encounter in the wild. Most importantly, 400s are your fault and 500s are the website's fault: - 1xx Informational- 2xx Success - 200 - OK- 3xx Redirection- 4xx Client Error (a.k.a. **your fault**) - 400 - Bad Request - 401 - Unauthorized - 403 - Forbidden - 404 - Not Found - 418 - 🍵 - 429 - Too many requests- 5xx Server Error (a.k.a. **their fault**) - 500 - Internal Server Error - 501 - Not Implemented - 502 - Bad Gateway - 503 - Service Unavailable - 504 - Gateway Timeout Uncomment and run to see how gazpacho handles HTTP status codes:
###Code
# get('https://httpstat.us/403')
# get('https://httpstat.us/404')
# get('https://httpstat.us/418')
###Output
_____no_output_____
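When looping over many URLs it can be handy to swallow these errors instead of crashing; a hedged sketch (the exact exception class gazpacho raises isn't shown above, so this catches broadly):

```python
from gazpacho import get

def safe_get(url):
    try:
        return get(url)
    except Exception as e:  # e.g. a 4xx/5xx status raised by get()
        print(f"Skipping {url}: {e}")
        return None

html = safe_get('https://httpstat.us/404')  # prints a warning instead of blowing up
```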
###Markdown
Structuring a `get` request Often we'll just need to point `get` at a URL. But sometimes, we'll need to manipulate the URL string to return specific information from a page. Here's a query string that might, for example, search for all cars made in 2020 with a colour equal to black:
###Code
url = 'https://httpbin.org/anything?year=2020&colour=black'
get(url)
###Output
_____no_output_____
###Markdown
If instead we wanted red cars made in 2016 we could edit the string, or we could do something a little more Pythonic and use a params dictionary instead:
###Code
url = 'https://httpbin.org/anything'
r = get(
url,
params={'year': 2016, 'colour': 'red'},
headers={'User-Agent': 'gazpacho'}
)
r
###Output
_____no_output_____
###Markdown
02 Scrape World The `get` requests that we've been looking at are still somewhat artificial... I bet you just want to start scraping already! Me too! But there's a problem...Building a web scraping course is hard. Because by the time this is published it could be that all of the examples are out of date. And it wouldn't be my fault. The web is always changing! So, to solve this problem, I've created a Web Scraping Sandbox that replicates some familiar pages (that won't change) available at: www.scrape.worldIf, for some reason www.scrape.world is down ($$$) you can grab source code from the repo [here](https://github.com/maxhumber/scrape.world), spin up a local application and change all the base urls accordingly:
###Code
local = False
if local:
url = 'localhost:5000'
else:
url = "https://scrape.world"
###Output
_____no_output_____
###Markdown
In this first www.scrape.world example let's scrape all of the link tags in the `section-speech` part of the page:
###Code
from gazpacho import get, Soup
url = "https://scrape.world/soup"
html = get(url)
soup = Soup(html)
fos = soup.find("div", {"class": "section-speech"})
links = []
for a in fos.find("a"):
try:
link = a.attrs["href"]
links.append(link)
except AttributeError:
pass
links = [l for l in links if "wikipedia.org" in l]
links
###Output
_____no_output_____
###Markdown
03 Tables Here's how we might scrape the total spend for each team on this fictional Salary Cap page:
###Code
from gazpacho import get, Soup
url = "https://scrape.world/spend"
html = get(url)
soup = Soup(html)
trs = soup.find("tr", {"class": "tmx"})
def parse_tr(tr):
team = tr.find("td", {"data-label": "TEAM"}).text
spend = float(
tr.find("td", {"data-label": "TODAYS CAP HIT"}).text.replace(",", "")[1:]
)
return team, spend
spend = [parse_tr(tr) for tr in trs]
spend
###Output
_____no_output_____
###Markdown
04 Credentials Sometimes what you're looking for is locked behind a login page. So long as you have a user account for that website, we can use Selenium to fake out a browser, capture the rendered HTML, and use gazpacho as normal. To install Selenium run:
###Code
!pip install selenium
###Output
_____no_output_____
###Markdown
And follow the additional setup instructions [here](https://stackoverflow.com/a/42231328/3731467). Using Selenium to log in with our credentials, we can grab the data at the /season endpoint by running:
###Code
%%writefile credentials.py
from gazpacho import Soup
import pandas as pd
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
url = "https://scrape.world/season"
options = Options()
options.headless = True
browser = Firefox(executable_path="/usr/local/bin/geckodriver", options=options)
browser.get(url)
# username
username = browser.find_element_by_id("username")
username.clear()
username.send_keys("admin")
# password
password = browser.find_element_by_name("password")
password.clear()
password.send_keys("admin")
# submit
browser.find_element_by_xpath("/html/body/div/div/form/div/input[3]").click()
# refetch page (just incase)
browser.get(url)
html = browser.page_source
soup = Soup(html)
tables = pd.read_html(browser.page_source)
east = tables[0]
west = tables[1]
df = pd.concat([east, west], axis=0)
df["W"] = df["W"].apply(pd.to_numeric, errors="coerce")
df = df.dropna(subset=["W"])
df = df[["Team", "W"]]
df = df.rename(columns={"Team": "team", "W": "wins"})
df = df.sort_values("wins", ascending=False)
print(df)
!python credentials.py
###Output
_____no_output_____
###Markdown
05 Interactions 1 Sometimes a website allows us to filter the data displayed on the page with dropdowns and search bars. To interact with dropdowns and other page elements we can use Selenium as well:
###Code
%%writefile interactions1.py
import time
from gazpacho import Soup
import pandas as pd
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import Select
url = "https://scrape.world/results"
options = Options()
options.headless = True
browser = Firefox(executable_path="/usr/local/bin/geckodriver", options=options)
browser.get(url)
# username
username = browser.find_element_by_id("username")
username.clear()
username.send_keys("admin")
time.sleep(0.5)
# password
password = browser.find_element_by_name("password")
password.clear()
password.send_keys("admin")
time.sleep(0.5)
# submit
browser.find_element_by_xpath("/html/body/div/div/form/div/input[3]").click()
time.sleep(0.5)
# refetch page (just incase)
browser.get(url)
search = browser.find_element_by_xpath("/html/body/div/div/div[2]/div[2]/label/input")
search.clear()
search.send_keys("toronto")
time.sleep(0.5)
drop_down = Select(
browser.find_element_by_xpath("/html/body/div/div/div[2]/div[1]/label/select")
)
drop_down.select_by_visible_text("100")
time.sleep(0.5)
html = browser.page_source
soup = Soup(html)
df = pd.read_html(str(soup.find("table")))[0]
print(df)
!python interactions1.py
###Output
_____no_output_____
###Markdown
06 Interactions 2 Piggybacking on the last example, here's how we might extract data that iteratively loads on scroll:
###Code
%%writefile interactions2.py
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from gazpacho import Soup
import pandas as pd
options = Options()
options.headless = True
browser = Firefox(executable_path="/usr/local/bin/geckodriver", options=options)
url = "https://scrape.world/population"
browser.get(url)
poplist = browser.find_element_by_id('infinite-list')
days = 365
n = 0
while n < days:
browser.execute_script(
'arguments[0].scrollTop = arguments[0].scrollHeight',
poplist
)
html = browser.page_source
soup = Soup(html)
lis = soup.find('ul', {'id': 'infinite-list'}).find('li')
n = len(lis)
def parse_li(li):
day, population = li.text.split(' Population ')
day = int(day.split('Day ')[-1])
population = float(population)
return day, population
population = [parse_li(li) for li in lis]
df = pd.DataFrame(population, columns=["day", "population_count"])
print(df)
!python interactions2.py
###Output
_____no_output_____
###Markdown
07 Downloading Sometimes we don't want HTML, but instead to extract an image, a video, or an audio clip from a web page. Here's how we might do that:
###Code
from pathlib import Path
from shutil import rmtree as delete
from urllib.request import urlretrieve as download
from gazpacho import get, Soup
dir = "media"
Path(dir).mkdir(exist_ok=True)
base = "https://scrape.world"
url = base + "/books"
html = get(url)
soup = Soup(html)
# download images
imgs = soup.find("img")
srcs = [i.attrs["src"] for i in imgs]
for src in srcs:
name = src.split("/")[-1]
download(base + src, f"{dir}/{name}")
# download audio
audio = soup.find("audio").find("source").attrs["src"]
name = audio.split("/")[-1]
download(base + audio, f"{dir}/{name}")
# download video
video = soup.find("video").find("source").attrs["src"]
name = video.split("/")[-1]
download(base + video, f"{dir}/{name}")
# clean up
delete(dir)
###Output
_____no_output_____
###Markdown
08 Scheduling (Local) Everything up until this point has been (hopefully interesting, but nonetheless) table stakes. We want to take our scraping skills to the next level by building a modern web scraper that can run on a schedule. Imagine we want to point our scraper at a page to monitor prices and send us notifications for when a sale is happening. Here's how we'd start building:
###Code
%%writefile books.py
from gazpacho import get, Soup
import pandas as pd
def parse(book):
name = book.find("h4").text
price = float(book.find("p").text[1:].split(" ")[0])
return name, price
def fetch_books():
url = "https://scrape.world/books"
html = get(url)
soup = Soup(html)
books = soup.find("div", {"class": "book-"})
return [parse(book) for book in books]
data = fetch_books()
books = pd.DataFrame(data, columns=["title", "price"])
string = f"Current Prices:\n```\n{books.to_markdown(index=False, tablefmt='grid')}\n```"
print(string)
###Output
_____no_output_____
###Markdown
**Scheduling** In order to schedule this script to execute at some cadence we'll use [hickory](https://github.com/maxhumber/hickory) (`pip install hickory`):```hickory schedule books.py --every=30seconds```To check the status of a hickory script, run:```hickory status```And to kill a schedule:```hickory kill books.py``` **Slack over print()**To send results to Slack instead of printing to a log file we'll use [`slackclient`](https://github.com/slackapi/python-slackclient) the official Slack API for Python:```pythonpip install slackclient```In order to build a Slack Bot, we'll need a Slack API token, which will require us to do the following:1. Create a new Slack AppFollow this [link](https://api.slack.com/apps) to open up the Apps Portal and click *Create New App*2. Add permissionsIn the menu on the left, find *OAuth and Permissions*. Click it, and scroll down to the *Scopes* section. Click *Add an OAuth Scope*.Search for the *chat:write* and *chat:write.public* scopes, and add them. At this point, you can install the app to your workspace.3. Copy the token to a `.env` fileOn the same page you'll find your access token under the label *Bot User OAuth Access Token*. Copy this token, and save it to a `.env` fileIt should look like this:```SLACK_API_TOKEN=xoxb-000000000-000000000-a0a0a0a0a0a0a0a0a0a0a0a0``` Once you have a Slack API token we can now adjust the original Python script to send messages to a Slack Channel of our choosing:
###Code
%%writefile booksbot.py
import os
import sqlite3
from gazpacho import get, Soup
from dotenv import find_dotenv, load_dotenv # pip install python-dotenv
import pandas as pd
from slack import WebClient # pip install slackclient
load_dotenv(find_dotenv())
con = sqlite3.connect("data/books.db")
cur = con.cursor()
slack_token = os.environ["SLACK_API_TOKEN"]
client = WebClient(token=slack_token)
def parse(book):
name = book.find("h4").text
price = float(book.find("p").text[1:].split(" ")[0])
return name, price
def fetch_books():
url = "https://scrape.world/books"
html = get(url)
soup = Soup(html)
books = soup.find("div", {"class": "book-"})
return [parse(book) for book in books]
data = fetch_books()
books = pd.DataFrame(data, columns=["title", "price"])
books['date'] = pd.Timestamp("now")
books.to_sql('books', con, if_exists='append', index=False)
average = pd.read_sql("select title, round(avg(price),2) as average from books group by title", con)
df = pd.merge(books[['title', 'price']], average)
string = f"Current Prices:```\n{df.to_markdown(index=False, tablefmt='grid')}\n```"
response = client.chat_postMessage(
channel="books",
text=string
)
###Output
_____no_output_____
###Markdown
Schedule with `hickory schedule booksbot.py --every=30seconds` to monitor prices on a 30 second cadence. 09 Serverless (Lambda) Let's say we want to build an app that scrapes something every day and have it be scheduled on AWS Lambda. Here's what we'll schedule:
###Code
import json
import os
import sys
from urllib.request import Request, urlopen
import pandas as pd
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
def post(url, data):
data = bytes(json.dumps(data).encode("utf-8"))
request = Request(url=url, data=data, method="POST")
request.add_header("Content-type", "application/json; charset=UTF-8")
with urlopen(request) as response:
response = json.loads(response.read().decode("utf-8"))
return response
url = "https://scrape.world/demand"
tomorrow = (pd.Timestamp('today') + pd.Timedelta('1 day')).strftime("%Y-%m-%d %H:00")
temperature = 21
data = {"date": tomorrow, "temperature": temperature}
response = post(url, data)
text = f"{tomorrow=} demand will be ~{response['demand']} MW"
print(text)
###Output
tomorrow='2020-08-28 16:00' demand will be ~6420.0 MW
###Markdown
Etec Students Grades We will build an application that automates the capture of student grades, attendance data and other information in the NSA system of Centro Paula Souza (CPS). Fill in the variables with your information to be able to perform the automatic login. *TOOLS*:- Selenium: [https://www.selenium.dev/](https://www.selenium.dev/)- ChromeDriver: [https://chromedriver.chromium.org/downloads](https://chromedriver.chromium.org/downloads)- Time: [https://docs.python.org/3/library/time.html](https://docs.python.org/3/library/time.html)- Pandas: [https://pandas.pydata.org/docs/](https://pandas.pydata.org/docs/)- BeautifulSoup: [https://beautiful-soup-4.readthedocs.io/en/latest/](https://beautiful-soup-4.readthedocs.io/en/latest/)- NSA System: [https://nsa.cps.sp.gov.br/](https://nsa.cps.sp.gov.br/) *VARIABLES:*- RM: _Your RM Code_- CPS_CODE: _Your CPS School Code_- PASSWORD: _Your Password_ Lib Installation
###Code
!pip install selenium
!apt-get update
!apt install chromium-chromedriver
!pip install beautifulsoup4
###Output
_____no_output_____
###Markdown
Definition of Information Variables
###Code
# ------------------------------------------------ #
RM = "YOUR RM"
CPS_CODE = "YOUR SCHOOL CODE"
PASSWORD = "YOUR PASSWORD"
# ------------------------------------------------ #
###Output
_____no_output_____
###Markdown
Lib Imports
###Code
import time
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
###Output
_____no_output_____
###Markdown
Browser Window Configuration
###Code
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--headless")
chrome_options.add_argument("--disable-dev-shm-usage")
navegador = webdriver.Chrome(options=chrome_options)  # headless Chrome session
###Output
_____no_output_____
###Markdown
Login to NSA using User information _(EtecCode, RM, Password)_
###Code
navegador.get("https://nsa.cps.sp.gov.br/")
navegador.find_element(By.XPATH, '//*[@id="txtCod"]').send_keys(CPS_CODE)
navegador.find_element(By.XPATH, '//*[@id="txtlogin"]').send_keys(RM)
navegador.find_element(By.XPATH, '//*[@id="txtSenha"]').send_keys(PASSWORD)
navegador.find_element(By.XPATH, '//*[@id="ctrlGoogleReCaptcha"]/div/div/div/iframe').click()
time.sleep(5)
navegador.find_element(By.XPATH, '//*[@id="btnEntrar"]').click()
time.sleep(2)
###Output
_____no_output_____
###Markdown
Get Information from the School Report
###Code
navegador.get("https://nsa.cps.sp.gov.br/alunos/frmmencoes.aspx")
element = navegador.find_element(By.XPATH, '//*[@id="ctl00_ContentPlaceHolder1_gvNotas"]')
html_content = element.get_attribute('outerHTML')
soup = BeautifulSoup(html_content, 'html.parser')
table = soup.find(name='table')
###Output
_____no_output_____
###Markdown
Transforming HTML to Data Frame with Pandas
###Code
data = pd.read_html(str(table))[0]
data
###Output
_____no_output_____
###Markdown
Convert the data to an Excel file
###Code
data.to_excel("GradesReport.xlsx", index=False)
###Output
_____no_output_____
###Markdown
Environment 00 Install [miniconda](https://docs.conda.io/en/latest/miniconda.html):
```
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -p $HOME/miniconda
source $HOME/miniconda/bin/activate
conda init zsh
```
Install [Atom](https://atom.io/) and get [Hydrogen](https://github.com/nteract/hydrogen):
```
Atom > Preferences > Install > {Search: Hydrogen} > Install
```
Data 00 `Iris` and `Boston` are boring. So we're going to look at some energy data I scraped instead! First, some stats [Source](https://www.nrcan.gc.ca/sites/www.nrcan.gc.ca/files/energy/pdf/energy-factbook-oct2-2018%20(1).pdf): I live in Toronto, the biggest city in Canada. So I wanted to look at energy demand for my city. And I figured, given the facts, that weather would have a big impact on demand... Here's the data I scraped. It's for the last two years and represents total energy demand (in MW) by hour:
###Code
import pandas as pd
df = pd.read_csv('data/weather_power.csv')
df.head()
###Output
_____no_output_____
###Markdown
Model 01 Select and split... let's just use temperature for now (dates are a little bit more complicated):
###Code
from sklearn.model_selection import train_test_split
target = 'energy_demand'
y = df[target]
X = df[['temperature']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
###Output
_____no_output_____
###Markdown
> Max's Tip: you should try to get to a number as quickly as possible To get to a number ASAP (ie: a score to beat) we'll use [`DummyRegressor`](https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyRegressor.html):
###Code
from sklearn.dummy import DummyRegressor
model = DummyRegressor()
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
`DummyRegressor` will literally just predict the average for everything... doesn't matter what you put in:
###Code
model.predict(X_test)
###Output
_____no_output_____
###Markdown
You can confirm this by running:
###Code
y_train.mean()
###Output
_____no_output_____
###Markdown
But now we have a model that we can score so that we know what to beat:
###Code
from sklearn.metrics import mean_squared_error
round(mean_squared_error(y_test, model.predict(X_test)) ** (1/2))
###Output
_____no_output_____
###Markdown
Given that Toronto on an average day uses ~5700 MW of electricity, having our model be off by ~1400 MW isn't super great! But it's a start! We're going to work on continuously improving this number. To run a prediction for a single row, or new entry, I like to send a row to a dictionary to get a sense of structure:
###Code
X.sample(1).to_dict(orient='list')
###Output
_____no_output_____
###Markdown
And then embed the output in a new pandas.DataFrame:
###Code
new = pd.DataFrame({'temperature': [21]})
model.predict(new)[0]
###Output
_____no_output_____
###Markdown
We're going to stuff new temperatures in our app into a pandas DataFrame exactly like that. But let's save the model so that we can wrap an app around it:
###Code
import pickle
with open('model.pkl', 'wb') as f:
pickle.dump(model, f)
###Output
_____no_output_____
###Markdown
Before moving on to building the app, let's make sure it works:
###Code
with open('model.pkl', 'rb') as f:
model = pickle.load(f)
model.predict(new)[0]
###Output
_____no_output_____
###Markdown
App 01 Now that we have a "model" (granted, it's pretty dumb!) it's time to stick it behind an app. To start, our app is just going to be a shitty hello world Flask app:
###Code
%%writefile app.py
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return 'Hello World!'
if __name__ == '__main__':
app.run(port=5000, debug=True)
###Output
Overwriting app.py
###Markdown
Preview the app by running it from the command line:```python app.py```And visiting: http://127.0.0.1:5000/ Interrupt the process (ctrl+c) to kill the app and move on... App 02 Now that we have some boilerplate in place, we can extend the hello world example to include the model. While this looks like it might work, we're going to run it anyways...
###Code
%%writefile app.py
import pickle
from flask import Flask
import pandas as pd
app = Flask(__name__)
with open('model.pkl', 'rb') as f:
model = pickle.load(f)
@app.route('/')
def index():
new = pd.DataFrame({'temperature': [20]})
prediction = model.predict(new)
print(prediction)
return prediction
if __name__ == '__main__':
app.run(port=5000, debug=True)
###Output
Overwriting app.py
###Markdown
Again, fire up the app at the command line:```python app.py```This time when you visit http://127.0.0.1:5000/ you'll get a: **TypeError** > TypeError: The view function did not return a valid response. The return type must be a string, dict, tuple, Response instance, or WSGI callable, but it was a ndarray. Interrupt the process so that we can work on a fix. App 03 The TypeError helpfully suggested that we could return a dictionary and it would work... let's do that now:
###Code
%%writefile app.py
import pickle
from flask import Flask
import pandas as pd
app = Flask(__name__)
with open('model.pkl', 'rb') as f:
model = pickle.load(f)
@app.route('/')
def index():
new = pd.DataFrame({'temperature': [20]})
prediction = model.predict(new)[0]
# return str(prediction)
return {'prediction': prediction}
if __name__ == '__main__':
app.run(port=5000, debug=True)
###Output
Overwriting app.py
###Markdown
Alternatively, we could `return str(prediction)`. But let's run the app at the command line and preview it once more. This time you should see the dictionary returned. App 04 You'll notice that while our model is embedded in our app it's just returning the prediction for temperature=20 on every single request. To make our app dynamic, we'll use "query params"... they look like:`http://website.com/endpoint?query=string`Query params will allow us to accept different inputs and pass them to our model via a pandas DataFrame... In order to capture query params, though, we need to import `flask.request` so that we can peel off dictionary keys from the `request.args` object...As we do this, let's rejig the app to have a separate temperature endpoint:
###Code
%%writefile app.py
import pickle
from flask import Flask, request
import pandas as pd
app = Flask(__name__)
with open('model.pkl', 'rb') as f:
model = pickle.load(f)
@app.route('/')
def index():
return 'Use the /predict endpoint'
@app.route('/predict')
def predict():
query = request.args
print(query)
new = pd.DataFrame({'temperature': [20]})
prediction = model.predict(new)[0]
return {'prediction': prediction}
if __name__ == '__main__':
app.run(port=5000, debug=True)
###Output
Overwriting app.py
###Markdown
Run the app and watch the command line when you hit it with different query strings like this: - http://127.0.0.1:5000/predict?hi=there&name=max- http://127.0.0.1:5000/predict?even=more&query=strings- http://127.0.0.1:5000/predict?temperature=25 You should see something like `ImmutableMultiDict([('hi', 'there'), ('name', 'max')])` printed to the console. Don't worry, it's basically just a dictionary...
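You can also hit the running app from Python instead of the browser; a small sketch using the third-party `requests` library (not part of this project's dependencies, so treat it as an optional extra):

```python
import requests  # pip install requests

r = requests.get("http://127.0.0.1:5000/predict", params={"temperature": 25})
print(r.status_code, r.json())
```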
###Code
%%writefile app.py
import pickle
from flask import Flask, request
import pandas as pd
app = Flask(__name__)
with open('model.pkl', 'rb') as f:
model = pickle.load(f)
@app.route('/')
def index():
return 'Use the /predict endpoint'
@app.route('/predict')
def predict():
query = request.args
temperature = float(query.get('temperature'))
print(temperature)
new = pd.DataFrame({'temperature': [temperature]})
prediction = model.predict(new)[0]
return {'prediction': prediction}
if __name__ == '__main__':
app.run(port=5000, debug=True)
###Output
Overwriting app.py
###Markdown
Preview the app at the command line and hit the url with some temperatures:- http://127.0.0.1:5000/predict?temperature=25- http://127.0.0.1:5000/predict?temperature=-10- http://127.0.0.1:5000/predict?temperature=5 You'll notice that the print statement in the console is registering the different values, but the model is just returning the same thing. Well, that shouldn't be a surprise, the model is dumb! Let's fix the dummy model we started with right now... Model 02 The `DummyRegressor` helped us quickly build an app, and gave us a number (in the form of RMSE) to beat; let's see if we can beat it with `LinearRegression`:
###Code
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
```python
model = LinearRegression()
model.fit(X_train, y_train)
```
Unfortunately, this breaks:
```
-----------------------------------------------------------------------
ValueError                          Traceback (most recent call last)
in
      2
      3 model = LinearRegression()
----> 4 model.fit(X_train, y_train)

ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
```
This is because our data has some NaNs in the temperature column:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 23040 entries, 0 to 23039
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 23040 non-null object
1 temperature 22873 non-null float64
2 energy_demand 23040 non-null int64
dtypes: float64(1), int64(1), object(1)
memory usage: 540.1+ KB
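The gap between the 23040 rows and the 22873 non-null temperatures is exactly the problem; the missing values can be counted directly (one line, using the same `df`):

```python
print(df['temperature'].isna().sum())  # 167 missing temperature readings
```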
###Markdown
Model 03 Not to worry, dealing with NaNs is a cinch with `DataFrameMapper` from [sklearn-pandas](https://github.com/scikit-learn-contrib/sklearn-pandas) (basically my secret weapon/super power). DataFrameMapper accepts a list of tuples, where each tuple identifies the column name and then the transformer that operates on it:
###Code
from sklearn.impute import SimpleImputer
from sklearn_pandas import DataFrameMapper
mapper = DataFrameMapper([
('temperature', SimpleImputer())
], df_out=True)
###Output
_____no_output_____
###Markdown
While it works as a first-class scikit-learn transformer (complete with `fit`, `transform`, and `fit_transform`) it comes with all of scikit-learn's finickiness:
```python
mapper.fit_transform(X_train)
```
Running this right now will hit a **ValueError**:
```
----------------------------------------------------------------
ValueError                          Traceback (most recent call last)
in
----> 1 mapper.fit_transform(X_train)

~/opt/miniconda3/lib/python3.8/site-packages/sklearn_pandas/dataframe_mapper.py in fit_transform(self, X, y)
    393         y       the target vector relative to X, optional
    394         """
--> 395         return self._transform(X, y, True)

ValueError: temperature: Expected 2D array, got 1D array instead:
array=[-16.9 -16.3 -17.6 ... 9.1 8.7 8.3].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
```
This is because of how scikit-learn is designed. Without going into the nitty-gritty, this error has to do with the difference between how scikit-learn handles strings and numbers and columns and series. Not to worry, errors like this in DataFrameMapper are easily fixed by wrapping the pesky column in square brackets:
###Code
mapper = DataFrameMapper([
(['temperature'], SimpleImputer())
], df_out=True)
mapper.fit_transform(X_train)
###Output
_____no_output_____
###Markdown
Model 04 Using the DataFrameMapper, we can now transform our `X` objects to intermediate `Z` objects... > Max's Tip: You could just overwrite the `X`s but I like `Z`s becauxe they remind me "HEY I DID SOMETHING TO THIS!~"
###Code
Z_train = mapper.fit_transform(X_train)
Z_test = mapper.transform(X_test)
model = LinearRegression()
model.fit(Z_train, y_train)
###Output
_____no_output_____
###Markdown
Now let's score the model again:
###Code
round(mean_squared_error(y_test, model.predict(Z_test)) ** (1/2))
###Output
_____no_output_____
###Markdown
And notice that this new model beats the dummy, albeit by not much. Peeking at some examples we can see that it's still pretty constrained around the mean:
###Code
pd.DataFrame({
'y_true': y_test,
'y_hat': model.predict(Z_test)
}).sample(10)
###Output
_____no_output_____
###Markdown
Model 05 While our model still isn't amazing, it is an improvement, and a little more dynamic, so let's "serialize" the mapper and model:
###Code
with open('mapper.pkl', 'wb') as f:
pickle.dump(mapper, f)
with open('model.pkl', 'wb') as f:
pickle.dump(model, f)
###Output
_____no_output_____
###Markdown
But, wait up, this isn't super ideal because in this paradigm we have to keep track of and load two separate things:
###Code
with open('mapper.pkl', 'rb') as f:
mapper = pickle.load(f)
with open('model.pkl', 'rb') as f:
model = pickle.load(f)
###Output
_____no_output_____
###Markdown
And in order to predict we need to remember to transform first:
###Code
new = pd.DataFrame({'temperature': [21]})
Z_new = mapper.transform(new)
model.predict(Z_new)[0]
###Output
_____no_output_____
###Markdown
We can actually make things a little more simple... Model 06 Enter pipelines... a scikit-learn tool that will ensure that we only have to manage one thing!And, as an added bonus, using pipeline will make it so that we can get rid of the intermediate `Z` objects at the same time!
###Code
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(mapper, model)
pipe.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
Now we only have to dump one thing:
###Code
with open('pipe.pkl', 'wb') as f:
pickle.dump(pipe, f)
###Output
_____no_output_____
###Markdown
And load one object:
###Code
with open('pipe.pkl', 'rb') as f:
pipe = pickle.load(f)
###Output
_____no_output_____
###Markdown
Predictions don't require transformation now (because it happens inside the pipeline):
###Code
pipe.predict(new)[0]
###Output
_____no_output_____
###Markdown
In order to deploy this new pipeline model let's wrap everything into a single file...
###Code
%%writefile model.py
import pickle
import pandas as pd
from sklearn_pandas import DataFrameMapper
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
df = pd.read_csv('data/weather_power.csv')
target = 'energy_demand'
y = df[target]
X = df[['temperature']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
mapper = DataFrameMapper([
(['temperature'], SimpleImputer())
], df_out=True)
model = LinearRegression()
pipe = make_pipeline(mapper, model)
pipe.fit(X_train, y_train)
with open('pipe.pkl', 'wb') as f:
pickle.dump(pipe, f)
###Output
Overwriting model.py
###Markdown
App 06 Just a quick change to our app... we need to reference `pipe` and not `model` anymore:
###Code
%%writefile app.py
import pickle
from flask import Flask, request
import pandas as pd
app = Flask(__name__)
with open("pipe.pkl", 'rb') as f:
pipe = pickle.load(f)
@app.route("/")
def index():
return 'Use the /predict endpoint'
@app.route("/predict")
def predict():
query = request.args
temperature = float(query.get('temperature'))
new = pd.DataFrame({'temperature': [temperature]})
prediction = pipe.predict(new)[0]
return {'prediction': prediction}
if __name__ == "__main__":
app.run(port=5000, debug=True)
###Output
Overwriting app.py
###Markdown
Deploy 01 (Heroku) We're going to deploy our model and app on Heroku. In order to do that, follow these simple steps:
1. Setup a virtual environment:
```
python -m venv .venv
```
2. Activate it:
```
source .venv/bin/activate
```
3. Install the app and model dependencies (`gunicorn` is something that will sit between Heroku and our app...):
```
pip install gunicorn flask scikit-learn pandas sklearn_pandas
```
4. Freeze the dependencies:
```
pip freeze > requirements.txt
```
5. Retrain the model inside of the virtual environment:
```
python model.py
```
6. Make sure the app still works locally:
```
python app.py
```
7. Specify a python runtime (3.8 not working yet):
```
python --version
echo "python-3.7.9" > runtime.txt
```
8. Create a `Procfile`:
```
echo "web: gunicorn app:app" > Procfile
```
9. (Optional) If your project isn't already a git repo, make it one:
```
git init
touch .gitignore
echo ".venv" >> .gitignore
```
10. Login to Heroku from the [command line](https://devcenter.heroku.com/articles/heroku-cli):
```
heroku login
```
11. Create a project:
```
heroku create
```
12. Add a remote to the randomly generated project:
```
heroku git:remote -a silly-words-009900
```
13. Test the app locally:
```
heroku local
```
14. add, commit, push:
```
git add .
git commit -m '🚀'
git push heroku
```
15. Hit the url and make sure it works!
- http://<your-app-url>/predict?temperature=20
16. Make sure nothing is wrong (check the logs!):
```
heroku logs -t
```
###Code
%%writefile app.py
import pickle
import pandas as pd
import uvicorn
from fastapi import FastAPI
app = FastAPI()
with open('pipe.pkl', 'rb') as f:
pipe = pickle.load(f)
@app.get('/')
def index():
return 'Use the /predict endpoint with a temperature argument'
@app.get('/predict')
def predict(temperature: float):
new = pd.DataFrame({'temperature': [temperature]})
prediction = pipe.predict(new)[0]
return {'prediction': prediction}
if __name__ == '__main__':
uvicorn.run(app)
###Output
Overwriting app.py
###Markdown
The main differences: `route`s are now `get`s and we can specify types in the endpoint functions! To run this fastAPI at the command line use:
```
uvicorn app:app --port 5000 --reload
```
> Tip: The first app in `app:app` is the file name, and the second app is the name of the app inside the file. These can be whatever!

Try it out and make sure it still works:
- http://127.0.0.1:5000/predict?temperature=20

As a bonus there's also some wicked auto-generated docs at:
- http://127.0.0.1:5000/docs

Kill the app once you've verified it works to move on... Deploy 02 (Heroku) To deploy fastAPI to Heroku isn't much more work because we already have an environment setup.
0. If you're not in your environment anymore you can re-enter with:
```
source .venv/bin/activate
# don't do it now, but to exit the environment you can call:
deactivate
```
1. Install the new dependencies:
```
pip install uvicorn fastapi
```
2. Freeze the dependencies:
```
pip freeze > requirements.txt
```
3. Retrain the model inside the virtual environment:
```
python model.py
```
4. Make sure the app still works locally:
```
uvicorn app:app --port=5000
```
5. Create a new `Procfile`:
```
echo 'web: uvicorn app:app --host=0.0.0.0 --port=${PORT:-5000}' > Procfile
```
6. Test the app locally:
```
heroku local
```
7. add, commit, push:
```
git add .
git commit -m '🚀'
git push heroku
```
8. Click on the url and make sure it works!
- http://<your-app-url>/predict?temperature=20

Model 07 We upgraded our app, but our model is still stuck in the dark ages...
###Code
from matplotlib import pyplot as plt
plt.plot(range(len(y_test)), y_test, label='MW')
plt.plot(range(len(y_test)), model.predict(Z_test), label='MW (Predicted)')
plt.legend();
###Output
_____no_output_____
###Markdown
It barely explains anything! But temperature should be able to explain a lot (as it pertains to energy demand):
###Code
plt.scatter(df['temperature'], df['energy_demand'], alpha=1/20);
###Output
_____no_output_____
###Markdown
The problem is, temperature and energy demand aren't linearly related. When it's cold, we heat our homes, and when it's hot, we cool our homes! Temperature has a parabolic relationship to energy! Whenever we see a "U" or a rainbow, we should think `PolynomialFeatures`. Adding Poly to our pipeline is a matter of slightly adjusting the DataFrameMapper:
###Code
from sklearn.preprocessing import PolynomialFeatures
mapper = DataFrameMapper([
(['temperature'], [SimpleImputer(), PolynomialFeatures(degree=2, include_bias=False)])
], df_out=True)
mapper.fit_transform(X_train)
###Output
_____no_output_____
###Markdown
In a full pipeline it looks like this:
###Code
pipe = make_pipeline(mapper, model)
pipe.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
And this time our model scores:
###Code
round(mean_squared_error(y_test, pipe.predict(X_test)) ** (1/2))
###Output
_____no_output_____
###Markdown
And explains over half of the variance in energy demand:
###Code
pipe.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
We should be pretty excited. While we still have a lot of work to do, let's wrap this modified code into a `model.py` file so that we can redeploy!
###Code
%%writefile model.py
import pickle
import pandas as pd
from sklearn_pandas import DataFrameMapper
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
df = pd.read_csv('data/weather_power.csv')
target = 'energy_demand'
y = df[target]
X = df[['temperature']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
mapper = DataFrameMapper([
(['temperature'], [SimpleImputer(), PolynomialFeatures(degree=2, include_bias=False)])
], df_out=True)
model = LinearRegression()
pipe = make_pipeline(mapper, model)
pipe.fit(X_train, y_train)
with open('pipe.pkl', 'wb') as f:
pickle.dump(pipe, f)
###Output
Overwriting model.py
###Markdown
Deploy 03 (Heroku) As no dependencies have changed, it's going to be a lot easier to push a change, just... 1. Retrain the model inside of the virtual environment:```python model.py```2. Test the app locally:```heroku local```3. Try a bucnch of different temperatures:- http://0.0.0.0:5000/predict?temperature=30- http://0.0.0.0:5000/predict?temperature=21- http://0.0.0.0:5000/predict?temperature=-104. Kill the app to move on...5. add, commit push:```git add .git commit -m '🚀'git push heroku```6. Click on the url and make sure it works!- http://\/predict?temperature=20 Model 08 Right now our model is just using temperature, and although it does alright:
###Code
plt.plot(range(len(y_test)), y_test, label='MW')
plt.plot(range(len(y_test)), pipe.predict(X_test), label='MW (Predicted)')
plt.legend();
###Output
_____no_output_____
###Markdown
We can do better. We have a date column with timestamps! There has to be some good signal in there. To get at it, let's first convert the date column from a string to a proper datetime:
###Code
df['date'] = pd.to_datetime(df['date'])
###Output
_____no_output_____
###Markdown
Alternatively, we can parse the date column (0th position) on import, which is actually preferred:
###Code
df = pd.read_csv('data/weather_power.csv', parse_dates=[0])
###Output
_____no_output_____
###Markdown
The most interesting and easiest date signals to capture are month, weekday, and hour:
###Code
col = df['date']
pd.concat([col.dt.month, col.dt.weekday, col.dt.hour], axis=1)
###Output
_____no_output_____
###Markdown
To embed these new signals in our app let's recut our `X`s and `y`s:
###Code
target = 'energy_demand'
y = df[target]
X = df[['date', 'temperature']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
###Output
_____no_output_____
###Markdown
And build a scikit-learn transformer to handle the date features for us:
###Code
from sklearn.base import TransformerMixin
class DateEncoder(TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
return pd.concat([X.dt.month, X.dt.weekday, X.dt.hour], axis=1)
###Output
_____no_output_____
###Markdown
This way we can apply it on the `X_train` and `X_test` objects:
###Code
DateEncoder().fit_transform(X_train['date'])
###Output
_____no_output_____
###Markdown
And easily embed it in our mapper framework:
###Code
mapper = DataFrameMapper([
('date', DateEncoder(), {'input_df': True}),
(['temperature'], [SimpleImputer(), PolynomialFeatures(degree=2, include_bias=False)])
], df_out=True)
model = LinearRegression()
pipe = make_pipeline(mapper, model)
pipe.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
This new model shaves even more off our RMSE score (remember, we started off at ~1400 MW!!):
###Code
mean_squared_error(y_test, pipe.predict(X_test)) ** (1/2)
###Output
_____no_output_____
###Markdown
And explains even more variance:
###Code
pipe.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
The prediction plot shows that the date signals really helped:
###Code
plt.plot(range(len(y_test)), y_test, label='MW')
plt.plot(range(len(y_test)), pipe.predict(X_test), label='MW (Predicted)')
plt.legend();
###Output
_____no_output_____
###Markdown
Unfortunately, because of how pickle works, we have to move the `DateEncoder` to a `utils.py` file (and import it separately in our app... bummer, I know).
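Pickle doesn't store a class's source code — it stores a reference (module name + class name) and expects to re-import the class at load time. A rough standalone sketch of the failure mode (not code from this project):

```python
# sketch: why custom classes need to live in an importable module
import pickle

class DefinedInANotebook:            # imagine this class only exists in __main__ / a notebook cell
    pass

blob = pickle.dumps(DefinedInANotebook())
# The blob essentially says "import DefinedInANotebook from __main__".
# A different process (our deployed app) that calls pickle.loads(blob) will fail with
# an AttributeError unless it can import the very same class from the very same place --
# hence DateEncoder moves to utils.py, which both model.py and app.py import.
```

With that in mind, here's the new `utils.py`: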
###Code
%%writefile utils.py
import pandas as pd
from sklearn.base import TransformerMixin
class DateEncoder(TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
return pd.concat([X.dt.month, X.dt.weekday, X.dt.hour], axis=1)
###Output
Overwriting utils.py
###Markdown
The full working model should be written to a new `model.py` file so that we can retrain within our environment:
###Code
%%writefile model.py
import pickle
import pandas as pd
from sklearn_pandas import DataFrameMapper
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from utils import DateEncoder # CUSTOM MODULE IMPORT
df = pd.read_csv('data/weather_power.csv', parse_dates=[0])
target = 'energy_demand'
y = df[target]
X = df[['date', 'temperature']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
mapper = DataFrameMapper([
('date', DateEncoder(), {'input_df': True}),
(['temperature'], [SimpleImputer(), PolynomialFeatures(degree=2, include_bias=False)])
], df_out=True)
model = LinearRegression()
pipe = make_pipeline(mapper, model)
pipe.fit(X_train, y_train)
with open('pipe.pkl', 'wb') as f:
pickle.dump(pipe, f)
###Output
Overwriting model.py
###Markdown
App 08 To integrate this new model into our FastAPI app, add the custom `DateEncoder` import and adjust the index endpoint function to accept a dictionary. Might as well make it a POST request at the same time!
###Code
%%writefile app.py
import pickle
import pandas as pd
import uvicorn
from fastapi import FastAPI
from typing import Dict
import os
from utils import DateEncoder
app = FastAPI()
with open('pipe.pkl', 'rb') as f:
pipe = pickle.load(f)
@app.post('/')
def index(json_data: Dict):
new = pd.DataFrame({
'date': [pd.Timestamp(json_data.get('date'))],
'temperature': [float(json_data.get('temperature'))]
})
prediction = pipe.predict(new)[0]
return {'prediction': prediction}
if __name__ == '__main__':
uvicorn.run(app)
###Output
Overwriting app.py
###Markdown
Post Carrier Function Converting our app from GET to POST requests means that we lose the query-string paradigm. In order to interoperate with this new app, we'll need a function to send JSON data to the endpoint and parse the response. This is what I use:
###Code
import json
from urllib.request import Request, urlopen
import pandas as pd
def post(url, data):
data = bytes(json.dumps(data).encode("utf-8"))
request = Request(
url=url,
data=data,
method="POST"
)
request.add_header("Content-type", "application/json; charset=UTF-8")
with urlopen(request) as response:
data = json.loads(response.read().decode("utf-8"))
return data
###Output
_____no_output_____
###Markdown
Deploy 04 (Dokku) While Heroku is great, it's expensive, and apps go to sleep after a while. It's time to graduate to Dokku (an open-source Heroku alternative, and overall just way better). Deploying to Dokku is a little more involved, but I promise it's worth it!Rerun the model:```python model.py```Make sure it still works locally:```uvicorn app:app --port 5000 --reload```And use our new "post carrier" function to test:
###Code
data = {
"date": str(pd.Timestamp('now')),
"temperature": 25
}
###Output
_____no_output_____
###Markdown
```pythonpost("http://127.0.0.1:5000", data)``` Once you've confirmed that it still works, let's get working on Dokku. My Dokku setup goes like this:1. Sign up for a [DigitalOcean](https://m.do.co/c/2909cd1f3f10) account2. Spin up a $5 Ubuntu 20/18.04 server...3. ssh into it:```ssh root@<your-droplet-ip>```4. (Strongly advised) Update everything:```sudo apt updatesudo apt -y upgrade```5. Set up a firewall:```ufw app listufw allow OpenSSHufw enable```6. Add some rules ([source](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-with-ufw-on-ubuntu-18-04)):```sudo ufw default deny incomingsudo ufw default allow outgoingsudo ufw allow sshsudo ufw allow 22sudo ufw allow httpsudo ufw allow https```7. Install dokku (be patient... takes about 5 minutes):```wget https://raw.githubusercontent.com/dokku/dokku/v0.21.4/bootstrap.shsudo DOKKU_TAG=v0.21.4 bash bootstrap.sh```**THIS STEP IS IMPORTANT!**8. Visit the Droplet’s IP address in a browser to finish configuring Dokku9. Copy and paste your ssh key from your **laptop** into the config window:```cat .ssh/id_rsa.pub```10. And add the IP of the server to the hostname:`142.XXX.153.207`11. Click "Finish Setup"...12. Go back to the server terminal and create a dokku app on the server (I'm calling this one `powerapp`):```dokku apps:create powerappdokku domains:enable powerapp```**On Laptop**13. Add dokku as a remote:```git remote add dokku dokku@<your-droplet-ip>:powerapp```14. Verify that the remote got added:```git remote -v```15. Push it up (for every new change just run these commands):```git add .git commit -m '🤞'git push dokku```16. Test if it works with the post function: ```python post("http://142.93.153.207", data)``` Model 09 Okay, time to graduate to some deep learning. Unfortunately, TensorFlow doesn't play nicely with Heroku (it's far too heavy for the slug size limit). But we aren't deploying to Heroku anymore; we've got Dokku and it works just fine! Using a `Sequential` model, I'm going to build a simple neural net with two hidden layers:
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
mapper = DataFrameMapper([
('date', DateEncoder(), {'input_df': True}),
(['temperature'], [SimpleImputer(), PolynomialFeatures(degree=2, include_bias=False)])
], df_out=True)
Z_train = mapper.fit_transform(X_train)
Z_test = mapper.transform(X_test)
model = Sequential()
model.add(Input(shape=(Z_train.shape[1],)))
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(
loss='mean_squared_error',
optimizer='adam',
metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
model.fit(Z_train, y_train, epochs=100, verbose=0)
###Output
_____no_output_____
###Markdown
Evaluating the model we can see that it crushes our scikit-learn model:
###Code
model.evaluate(Z_test, y_test)
###Output
72/72 [==============================] - 0s 493us/step - loss: 255762.5625 - root_mean_squared_error: 505.7297
###Markdown
Explaining over 85% of the variance!!
###Code
from sklearn.metrics import r2_score
r2_score(y_test, model.predict(Z_test))
###Output
_____no_output_____
###Markdown
Which, honestly, is pretty incredible for just a date and a temperature feature. To predict a new row, we have to chain together the mapper and the model:
###Code
new = pd.DataFrame({
'date': [pd.Timestamp('now')],
'temperature': [17]
})
model.predict(mapper.transform(new))[0][0]
###Output
_____no_output_____
###Markdown
Look at how dope this is:
###Code
plt.figure(figsize=(10, 5))
plt.plot(X_test['date'], y_test, alpha=1/2, label='MW');
plt.plot(X_test['date'], model.predict(Z_test).flatten(), alpha=1/2, label='MW (Predicted)');
plt.legend();
###Output
_____no_output_____
###Markdown
Model 10 I don't like how we have to keep track of a bunch of things and do a transform before prediction. To get around that, let's bring in the `KerasRegressor` wrapper and a `SelectKBest` feature transformer (to make sure that our input shape is always consistent... not really needed here, but a good idea):
###Code
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor
from tensorflow.keras.models import load_model
from sklearn.feature_selection import SelectKBest
Z_train = mapper.fit_transform(X_train)
Z_test = mapper.transform(X_test)
columns = 5
select = SelectKBest(k=columns)
select.fit_transform(Z_train, y_train)
def nn():
columns = 5
m = Sequential()
m.add(Input(shape=(columns,)))
m.add(Dense(10, activation='relu'))
m.add(Dense(10, activation='relu'))
m.add(Dense(1))
m.compile(
loss='mean_squared_error',
optimizer='adam',
metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
return m
model = KerasRegressor(nn, epochs=100, verbose=0)
###Output
_____no_output_____
###Markdown
Now our pipeline can transform and predict like we're used to:
###Code
pipe = make_pipeline(mapper, select, model)
pipe.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Rolling a new prediction on a new datapoint just looks like this now:
###Code
new = pd.DataFrame({
'date': [pd.Timestamp('now')],
'temperature': [17]
})
float(pipe.predict(new))
###Output
_____no_output_____
###Markdown
Unfortunately, serializing is a bit of a headache. We can't just dump the pipeline as we have been; we first have to save the TF model weights to disk, remove the network from the wrapper, and then pickle the rest:
###Code
pipe.named_steps['kerasregressor'].model.save('model.h5')
pipe.named_steps['kerasregressor'].model = None
with open('pipe.pkl', 'wb') as f:
pickle.dump(pipe, f)
###Output
_____no_output_____
###Markdown
To load it all back up, it'll look like this:
###Code
with open('pipe.pkl', 'rb') as f:
pipe = pickle.load(f)
pipe.named_steps['kerasregressor'].model = load_model('model.h5')
###Output
_____no_output_____
###Markdown
But in our app, we'll just have to call:
###Code
float(pipe.predict(new))
###Output
_____no_output_____
###Markdown
Just like we had to move `DateEncoder` to a separate `utils.py` module, so too do we have to do the same thing for the `nn` function definition:
###Code
%%writefile utils.py
import pandas as pd
from sklearn.base import TransformerMixin
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
class DateEncoder(TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
return pd.concat([X.dt.month, X.dt.weekday, X.dt.hour], axis=1)
def nn():
columns = 5
m = Sequential()
m.add(Input(shape=(columns,)))
m.add(Dense(10, activation='relu'))
m.add(Dense(10, activation='relu'))
m.add(Dense(1))
m.compile(
loss='mean_squared_error',
optimizer='adam',
metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
return m
###Output
Overwriting utils.py
###Markdown
Now let's write the full model:
###Code
%%writefile model.py
import pickle
import pandas as pd
from sklearn_pandas import DataFrameMapper
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import SelectKBest
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor
from tensorflow.keras.models import load_model
from utils import DateEncoder, nn
df = pd.read_csv('data/weather_power.csv', parse_dates=[0])
target = 'energy_demand'
y = df[target]
X = df[['date', 'temperature']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
mapper = DataFrameMapper([
('date', DateEncoder(), {'input_df': True}),
(['temperature'], [SimpleImputer(), PolynomialFeatures(degree=2, include_bias=False)])
], df_out=True)
columns = 5
select = SelectKBest(k=columns)
model = KerasRegressor(nn, epochs=100, batch_size=32, verbose=0)
pipe = make_pipeline(mapper, select, model)
pipe.fit(X_train, y_train)
pipe.named_steps['kerasregressor'].model.save('model.h5')
pipe.named_steps['kerasregressor'].model = None
with open('pipe.pkl', 'wb') as f:
pickle.dump(pipe, f)
###Output
Overwriting model.py
###Markdown
App 9+10 We beefed up our model real good. Let's do the same to our app! We'll add in the TensorFlow model, turn our endpoint into an `async` function, and lean on `pydantic` to more easily parse the `RequestData` in the POST request:
###Code
%%writefile app.py
import os
import pickle
from fastapi import FastAPI
import uvicorn
from typing import Dict
from pydantic import BaseModel
import pandas as pd
from tensorflow.keras.models import load_model
from utils import DateEncoder, nn
app = FastAPI()
with open('pipe.pkl', 'rb') as f:
pipe = pickle.load(f)
pipe.named_steps['kerasregressor'].model = load_model('model.h5')
class RequestData(BaseModel):
date: str
temperature: float
@app.post('/')
async def index(request: RequestData): # add async and RequestData
new = pd.DataFrame({
'date': [pd.Timestamp(request.date)],
'temperature': [request.temperature]
})
prediction = float(pipe.predict(new))
return {'prediction': prediction}
if __name__ == '__main__':
uvicorn.run(app)
###Output
Overwriting app.py
###Markdown
Deploy 05 (Dokku) To deploy these new changes...1. Update the environment:```pip install tensorflow```2. Freeze the dependencies:```pip freeze > requirements.txt```3. Retrain the model inside the virtual environment:```python model.py```4. Make sure the app still works locally:```uvicorn app:app --port 5000 --reload```5. Push everything up to Dokku:```git add .git commit -m '🚀'git push dokku```6. Check the logs on the server (to make sure it all works):```dokku logs powerapp --tail```7. Test with the post function:
###Code
data = {
"date": str(pd.Timestamp('now')),
"temperature": 20
}
###Output
_____no_output_____
|
toxic_spans.ipynb
|
###Markdown
Finding toxic words is done. Now we need to locate them in lines to convert it into a token classification task.
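Concretely, "token classification" here just means attaching a 0/1 target to every token of a line — 1 wherever the token is one of the toxic words we extracted. A toy sketch (made-up words and labels, not project data):

```python
import numpy as np

words = np.array(['you', 'are', 'a', 'nasty', 'troll'])   # toy tokenised line
toxic = {'nasty', 'troll'}                                # toy toxic words found for this line
labels = np.array([1 if w in toxic else 0 for w in words])
print(labels)   # [0 0 0 1 1] -> per-token targets for the model
```

The cells below do this for real: clean the text, split it into words, and then build these labels.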
###Code
def remove_whitespace(text):
whitespace = re.compile(r"\s+")
return whitespace.sub(" ", text).strip()
def remove_ascii(text):
return (text.encode('ascii', 'ignore')).decode("utf-8")
# lowercasing, removing extra whitespace, and removing non-ASCII characters before feeding the text into our model
# Train data
lines = lines.apply(lambda text: text.lower())
lines = lines.apply(lambda text: remove_whitespace(text))
lines = lines.apply(lambda text: remove_ascii(text))
print(lines[2602])
print("**********************************************")
lines_original = lines_original.apply(lambda text: text.lower())
print(lines_original[2602])
print("\n#################################################################################################################\n")
#trial data
lines_trial = lines_trial.apply(lambda text: text.lower())
lines_trial = lines_trial.apply(lambda text: remove_whitespace(text))
lines_trial = lines_trial.apply(lambda text: remove_ascii(text))
print(lines_trial[260])
print("**********************************************")
lines_original_trial = lines_original_trial.apply(lambda text: text.lower())
print(lines_original_trial[260])
#Splitting lines into words
#Train Data
lines_split = lines.apply(lambda text: text.split())
print(len(lines_split[0]))
print(lines_split[0])
print("\n#################################################################################################################\n")
#Trial Data
lines_split_trial = lines_trial.apply(lambda text: text.split())
print(len(lines_split_trial[260]))
print(lines_split_trial[260])
# Handles empty strings left behind in some places by the remove-punctuation step applied next.
def data_leak(word):
if word == '':
word = "p"
return word
# Cleaning the separated words
def split_filter(text):
text_array = pd.Series(text)
text_array = text_array.apply(lambda word: remove_punctuation(word))
    # data problem caused by removing punctuation, as some strings were purely punctuation
text_array = text_array.apply(lambda word: data_leak(word))
if(len(text) != len(text_array)):
print("Length mismatch")
return np.asarray(text_array)
# Train data
lines_split_no_punct = lines_split.apply(lambda l: split_filter(l))
print(lines_split[0])
print(lines_split_no_punct[0])
print("\n#################################################################################################################\n")
#Trial data
lines_split_no_punct_trial = lines_split_trial.apply(lambda l: split_filter(l))
print(lines_split_trial[260])
print(lines_split_no_punct_trial[260])
###Output
_____no_output_____
###Markdown
Text pre-processing is done. We now create labels using the toxic words found in the cleaned text.
###Code
def create_label(toxic_word, word_array):
words = list(set(toxic_word))
label = np.zeros((len(word_array)))
for word in words:
positions = (np.where(word_array == word))
for position in positions:
label[position] = 1
return label
train_tags = []
for i in range(0,len(lines_split_no_punct)):
tag = create_label(toxic_words[i], lines_split_no_punct[i])
train_tags.append(tag)
index = 2
print(lines_split_no_punct[index])
print(train_tags[index])
print(toxic_words[index])
# Converting from np array to lists for tokenizer
# Train Data
train_texts = list(lines_split_no_punct)
for i in range(0,len(train_texts)):
train_texts[i] = list(train_texts[i])
print("Length of Train data: ",np.shape(train_texts))
# Trial Data
trial_texts = list(lines_split_no_punct_trial)
for i in range(0,len(trial_texts)):
trial_texts[i] = list(trial_texts[i])
print("Length of Trial data: ",np.shape(trial_texts))
# Finding length of sequences (hyper parameter for neural network.)
u = lambda text: len(text.split(" "))
sentence_lengths = []
for x in train_texts:
sentence_lengths.append(len(x))
print(sorted(sentence_lengths)[-50:])
print(len(sentence_lengths))
###Output
_____no_output_____
###Markdown
Token classification task
###Code
from transformers import TFMPNetModel, MPNetTokenizerFast, XLNetTokenizerFast, TFXLNetModel, AlbertTokenizerFast, TFMT5EncoderModel, TFAlbertModel, TFT5EncoderModel, T5TokenizerFast, TFT5Model, RobertaTokenizerFast, TFRobertaModel, AutoTokenizer, TFXLMRobertaModel, TFBertModel, BertTokenizerFast, TFElectraModel, ElectraTokenizerFast
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import ModelCheckpoint
from official import nlp
import official.nlp.optimization
from sklearn.metrics import classification_report
# Use the tokenizer as required. Remove add_prefix_space for tokenizers other than RoBERTa.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
# Train set
train_encodings = tokenizer(train_texts, is_split_into_words=True, padding=True, truncation=True)
# Trial set (max length is set for different tokenizers some returned less than 250)
trial_encodings = tokenizer(trial_texts, max_length=250, is_split_into_words=True, padding="max_length", truncation=True)
print(np.shape(train_encodings.input_ids))
print(np.shape(trial_encodings.input_ids))
# Makes labels compatible with the tokenizer's subword split and returns training masks for prediction.
def encode_tags(tags, encodings):
label_all_tokens = False
encoded_labels = []
masks = []
for i in range(0, len(tags)):
if( i%1000 == 0):
print(str(i) + "...")
label = tags[i]
# print(label)
word_ids = encodings[i].word_ids
# print(word_ids)
previous_word_idx = None
label_ids = []
mask_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
label_ids.append(-100)
mask_ids.append(0)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
label_ids.append(label[word_idx])
mask_ids.append(1)
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
else:
label_ids.append(label[word_idx] if label_all_tokens else -100)
mask_ids.append(label[word_idx] if label_all_tokens else 0)
previous_word_idx = word_idx
# print(label_ids)
# print(mask_ids)
# print()
encoded_labels.append(label_ids)
masks.append(mask_ids)
return (encoded_labels, masks)
train_labels, train_masks = encode_tags(train_tags, train_encodings)
index = 3
print(len(train_tags[index]))
print(len(train_texts[index]))
# Returns masks for trial/test data as per tokenizer
def get_masks(texts, encodings):
label_all_tokens = False
masks = []
for i in range(0, len(texts)):
if(i%100 == 0):
print(i,"...")
word_ids = encodings[i].word_ids
# print(word_ids)
previous_word_idx = None
mask_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
mask_ids.append(0)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
mask_ids.append(1)
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
else:
                mask_ids.append(0)  # no per-label info in this function; label_all_tokens is False
previous_word_idx = word_idx
# print(mask_ids)
# print()
masks.append(mask_ids)
return (masks)
trial_masks = get_masks(trial_texts, trial_encodings)
# Test function for lengths
for i in range(0,len(train_encodings.input_ids)):
if(len(train_encodings.input_ids[i]) != len(train_labels[i])):
print(i)
for i in range(0,len(trial_encodings.input_ids)):
if(len(trial_encodings.input_ids[i]) != len(trial_masks[i])):
print(i)
# Train Data
truncated_train = np.asarray(train_encodings.input_ids)[:,:250]
truncated_train_labels = np.asarray(train_labels)[:,:250]
truncated_train_masks = np.asarray(train_masks)[:,:250]
# Trial Data
truncated_trial = np.asarray(trial_encodings.input_ids)[:,:250]
truncated_trial_masks = np.asarray(trial_masks)[:,:250]
attention_masks_train = np.asarray(train_encodings.attention_mask)[:,:250]
attention_masks_trial = np.asarray(trial_encodings.attention_mask)[:,:250]
print(np.shape(attention_masks_train))
print(np.shape(attention_masks_trial))
# Train Data
index = 0
print(train_texts[index])
print(toxic_words[index])
print(truncated_train_labels[index,:25])
print(truncated_train_masks[index,:25])
# Trial data
index = 0
print(trial_texts[index])
# print(toxic_words[index])
# print(truncated_train_labels[index,:40])
print(truncated_trial_masks[index,:60])
# Train Data
print(np.shape(truncated_train))
print(np.shape(truncated_train_labels))
print(np.shape(truncated_train_masks))
# Trial Data
print(np.shape(truncated_trial))
# print(np.shape(truncated_train_labels))
print(np.shape(truncated_trial_masks))
###Output
_____no_output_____
###Markdown
Model
###Code
strategy = tf.distribute.TPUStrategy(resolver)
# Bert, electra, roberta, XLM-Roberta Model, XLnet
def toxic_span(input_shape):
#Model
inputs = keras.Input(shape=input_shape, dtype='int32')
# Import model as required
model = TFRobertaModel.from_pretrained('roberta-base')
layer = model.layers[0]
output = layer(inputs)[0]
output = keras.layers.BatchNormalization()(output)
output = keras.layers.Dropout(0.1)(output)
dense = keras.layers.Dense(1, activation="sigmoid")
answer = keras.layers.TimeDistributed(dense)(output)
model = keras.Model(inputs=inputs, outputs=answer, name='toxic_span')
return model
from tensorflow.keras import backend as K
def custom_loss(y_true, y_pred):
bce = tf.keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
isMask = tf.math.not_equal(y_true, -100)
mask = tf.cast(isMask, dtype=tf.float32)
y_true_mask = tf.math.multiply(mask,tf.cast(y_true, dtype=tf.float32))
y_pred_mask = tf.math.multiply(mask,y_pred)
loss = bce(y_true, y_pred)
loss_masked = bce(y_true_mask, y_pred_mask) * 10
return loss_masked
# Set up epochs and steps
epochs = 4
batch_size = 16
train_data_size = len(truncated_train)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
warmup_steps = int(epochs * train_data_size * 0.1 / batch_size)
# creates an optimizer with learning rate schedule
optimizer = nlp.optimization.create_optimizer(
5e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
with strategy.scope():
model = toxic_span((250,))
optimizer = optimizer
loss_fun = custom_loss
model.compile(optimizer=optimizer, loss=loss_fun)
# model_ = toxic_span((250,), bert_layer)
model.summary()
len(truncated_train)
###Output
_____no_output_____
###Markdown
Custom evaluation metric
###Code
def get_predicted_words(train_prediction, train_texts, truncated_train_masks):
predicted_labels = []
predicted_toxic_words = []
round_pred = np.round(train_prediction)
train_texts = np.asarray(train_texts)
for i in range(0,len(truncated_train_masks)):
# print(i)
pred_label = np.zeros(len(train_texts[i]))
pred_label = round_pred[i][(truncated_train_masks[i,:] == 1)]
pred_label = np.squeeze(pred_label, axis=-1)
predicted_labels.append(pred_label)
pred_toxic_words = []
for j in range(0,len(pred_label)):
if (pred_label[j] == 1):
pred_toxic_words.append(train_texts[i][j])
predicted_toxic_words.append(pred_toxic_words)
return (predicted_labels, predicted_toxic_words)
def get_char_positions(lines_original, predicted_toxic_words):
char_positions = []
for i in range(0,len(lines_original)):
seq_i = []
for toxic_word in list(set(predicted_toxic_words[i])):
temp = [(m.start(),m.end()) for m in re.finditer(re.escape(toxic_word), lines_original[i])]
for start,end in temp:
seq_i.append(np.arange(start,end))
if(len(seq_i) != 0):
seq_i = set(np.concatenate(seq_i, axis=-1))
seq_i = list((seq_i))
seq_i.sort()
char_positions.append(seq_i)
return char_positions
def f1(predictions, gold):
"""
F1 (a.k.a. DICE) operating on two lists of offsets (e.g., character).
    >>> assert f1([0, 1, 4, 5], [0, 1, 6])[0] == 0.5714285714285714
    :param predictions: a list of predicted offsets
    :param gold: a list of offsets serving as the ground truth
    :return: a list [f1, precision, recall], each between 0 and 1
"""
if len(gold) == 0:
return [1,1,1] if len(predictions)==0 else [0,0,0]
nom = 2*len(set(predictions).intersection(set(gold)))
denom = len(set(predictions))+len(set(gold))
f1 = nom/denom
if len(predictions) == 0:
precision = 0
else:
precision = len(set(predictions).intersection(set(gold)))/len(set(predictions))
recall = len(set(predictions).intersection(set(gold)))/len(set(gold))
return [f1,precision, recall]
class EvaluationMetric(keras.callbacks.Callback):
def __init__(self, truncated_trial, trial_original, trial_texts, truncated_trial_masks, lines_original_trial, attention_masks):
super(EvaluationMetric, self).__init__()
self.truncated_trial = truncated_trial
self.trial_original = trial_original
self.trial_texts = trial_texts
self.truncated_trial_masks = truncated_trial_masks
self.lines_original_trial = lines_original_trial
self.attention_masks = attention_masks
def on_epoch_begin(self, epoch, logs={}):
print("\nTraining...")
def on_epoch_end(self, epoch, logs={}):
print("\nEvaluating...")
trial_prediction = self.model.predict(self.truncated_trial)
predicted_labels, predicted_toxic_words = get_predicted_words(trial_prediction, self.trial_texts, self.truncated_trial_masks)
final = get_char_positions(self.lines_original_trial, predicted_toxic_words)
sum_f1 = 0
precision = 0
recall = 0
for i in range(0,len(final)):
sum_f1 = sum_f1 + f1(final[i], self.trial_original[i])[0]
# print(f1(final[i], self.trial_original[i]))
precision = precision + f1(final[i], self.trial_original[i])[1]
recall = recall + f1(final[i], self.trial_original[i])[2]
print("\nF1 on val set: ",sum_f1/len(final))
print("\nPrecision on val set: ",precision/len(final))
print("\nRecall on val set: ",recall/len(final))
# Comment the evaluation metric while predicting on train set
evaluation_metric = EvaluationMetric(truncated_trial, np.asarray(df_trial["spans"]), trial_texts, truncated_trial_masks, lines_original_trial, attention_masks_trial)
checkpoint = ModelCheckpoint(filepath='/content/roberta.{epoch:03d}.h5',
verbose = 0,
save_weights_only=True,
                             save_freq='epoch')
# Roberta retrain used for visualisation
history = model.fit(
x = truncated_train,
y = truncated_train_labels,
batch_size=16,
shuffle=True,
callbacks = [evaluation_metric, checkpoint],
epochs=1)
###Output
_____no_output_____
###Markdown
Train Over
The next part is for creating the results file. Use the test file instead of the trial file when loading the trial data for test-set predictions.
###Code
# Test results if you imported test file during initialisation.
trial_prediction = model.predict(truncated_trial)
# trial_prediction[3][:20]
np.shape(trial_prediction)
predicted_labels, predicted_toxic_words = get_predicted_words(trial_prediction, trial_texts, truncated_trial_masks)
index = 1
print(trial_texts[index])
print(predicted_labels[index])
print("Predicted: ",predicted_toxic_words[index])
# print("True: ",toxic_words[index])
final = get_char_positions(lines_original_trial, predicted_toxic_words)
index = 11
print(trial_texts[index])
print(lines_original_trial[index])
# print(predicted_labels[index])
print("Predicted: ",predicted_toxic_words[index])
# print("True: ",toxic_words[index])
print("Predicted: ", final[index])
print("True: ", df_trial["spans"][index])
###Output
_____no_output_____
###Markdown
Prediction File
###Code
# make sure that the ids match the ones of the scores
predictions = list(final)
ids = df_train.index.to_list()
# write in a prediction file named "spans-pred.txt"
with open("spans-pred.txt", "w") as out:
for uid, text_scores in zip(ids, predictions):
out.write(f"{str(uid)}\t{str(text_scores)}\n")
! zip -r mpnet_2_high_precision.zip ./spans-pred.*
###Output
_____no_output_____
###Markdown
Analysis
###Code
sum_f1 = 0
precision = 0
recall = 0
for i in range(0,len(final)):
sum_f1 = sum_f1 + f1(final[i], df_trial["spans"][i])[0]
precision = precision + f1(final[i], df_trial["spans"][i])[1]
recall = recall + f1(final[i], df_trial["spans"][i])[2]
print("\nF1 on val set: ",sum_f1/len(final))
print("\nPrecision on val set: ",precision/len(final))
print("\nRecall on val set: ",recall/len(final))
###Output
_____no_output_____
|
tutoriais/brnn_ptb.ipynb
|
###Markdown
**Probabilistic Graphical Models - Practical Assignment****Universidade Federal de Minas Gerais | Graduate Program in Computer Science****Name:** Leandro Augusto Lacerda CamposDecember 2019 **1. Introduction** This practical assignment (TP) consists of implementing, using the Python 3 programming language and libraries from the Apache MXNet *framework*, the Bayesian approach to recurrent neural networks (RNNs) proposed by Fortunato et al. [2017]. In that approach, the authors add uncertainty and regularization to RNNs by applying the *bayes by backprop* (BBB) method formulated by Blundell et al. [2015].Specifically, we implement the application of the proposed approach to the language modeling problem. In that application, Fortunato et al. [2017] use the dataset of Marcus et al. [1993], known as the *Penn Treebank* (PTB), and the network architecture indicated by Zaremba et al. [2014] to develop a Bayesian RNN whose goal is to predict the next word of a sequence.Finally, it is worth noting that the following are outside the scope of this assignment:* the implementation of the method for locally adapting the variational approximation to batches of data, called *posterior sharpening* and proposed by Fortunato et al. [2017];* the implementation of the dynamic inference method proposed by Mikolov et al. [2010]; and* the application of the approach proposed by Fortunato et al. [2017] to the image captioning task. **1.1. Assumptions about the reader** We assume that you, the reader, know the topics listed below. In case you need to review them, we include some useful references:* Neural networks: data preparation, modeling, and training. See chapters [2](http://d2l.ai/chapter_preliminaries/index.html), [3](http://d2l.ai/chapter_linear-networks/index.html) and [4](http://d2l.ai/chapter_multilayer-perceptrons/index.html) of the book by Zhang et al. [2019].* Recurrent neural networks: data preparation, the *long short-term memory* (LSTM) architecture, *gradient clipping*, and the truncated version of *backpropagation through time* (BPTT). See chapters [8](http://d2l.ai/chapter_recurrent-neural-networks/index.html) and [9](http://d2l.ai/chapter_modern_recurrent-networks/index.html) of the book by Zhang et al. [2019].* Natural language processing: *word embedding*. See the first four sections of chapter [14](http://d2l.ai/chapter_natural-language-processing/index.html) of the book by Zhang et al. [2019].* Information theory: cross-entropy, perplexity, and the Kullback-Leibler divergence. See sections [17.11](http://d2l.ai/chapter_appendix_math/information-theory.html) and [8.4.4](http://d2l.ai/chapter_recurrent-neural-networks/rnn.html#perplexity) of the book by Zhang et al. [2019].* Bayesian inference. See the following [page](https://en.wikipedia.org/wiki/Bayesian_inference) on Wikipedia.We also assume that you have experience with Apache MXNet (or PyTorch, which is similar) and Python 3. **1.2. Assumptions about the execution environment** This assignment was designed to be run in a [Google Colab](https://colab.research.google.com/) environment with support for Python 3 and a graphics processing unit (GPU). To check or change this notebook's settings in Colab, click the 'Edit' menu and then 'Notebook settings'. We assume this GPU is compatible with version 10.1 of NVIDIA CUDA and that this *toolkit* is already installed in the environment.The execution time of this assignment in such an environment is approximately 2h30min.
This time can vary depending on the configuration of the environment you are connected to. In the best configurations tested, training each epoch of the Zaremba et al. [2014] model takes about 35 seconds. If you experience times 2x or 3x longer than that, we suggest disconnecting from the current runtime (click the 'Runtime' menu and then 'Manage sessions') and then connecting again.While running this notebook, you may run into the following problems:* A processing or memory error occurred on the GPU. In that case, we recommend disconnecting from the current runtime (click the 'Runtime' menu and then 'Manage sessions') and then connecting again.* The connection to the runtime was interrupted due to inactivity or because the usage time limit was exceeded. In that situation, try to connect again by clicking the 'Connect' button in the upper right corner of the page. Sometimes execution resumes from where it stopped; otherwise, start everything again.It is important to note that you do not need to run this notebook to evaluate the results of the assignment, since the output of every cell is saved in the published version. **1.2.1. Problems with the Google Colab GPU** Fairly often, the GPU of the Colab runtime you connect to cannot run this notebook correctly. Note that there are several environment configurations with GPU support. To run the final tests, it was necessary to use a paid instance of the Google Cloud platform. That instance had the following configuration:* Machine type: n1-standard-8 (8 vCPUs, 30 GB of memory)* GPUs: 1 x NVIDIA Tesla V100* Zone: us-west1-b* Image: common-cu101-20191005* Boot disk: SSD persistent diskTo connect Google Colab to a paid Google Cloud instance, follow the steps described in this [video](https://www.youtube.com/watch?v=U5HyNzf_ips).Another way to try to work around the GPU problem in Colab is the following:* Step 1: To train the Zaremba et al. [2014] model, comment out (put a `#` at the beginning of each line) the contents of the cell that instantiates and trains the Fortunato et al. [2017] model, and then run the notebook.* Step 2: Undo the action of step 1 and disconnect from the current runtime (click the 'Runtime' menu and then 'Manage sessions').* Step 3: To train the Fortunato et al. [2017] model, comment out (put a `#` at the beginning of each line) the contents of the cell that instantiates and trains the Zaremba et al. [2014] model, and then run the notebook. **1.3. Libraries and global settings** To run this assignment, we need to install and import the following libraries.
###Code
!pip install mxnet-cu101==1.6.0b20190915
import collections
import math
import os
import random
import time
import urllib.request
import zipfile
import re
from mxnet import np, npx, nd, context, autograd, gluon, init, util
from mxnet.gluon import nn, rnn
npx.set_np()
###Output
_____no_output_____
###Markdown
The `Args` class declares the global settings that we will use to implement and then train the medium-sized model of Zaremba et al. [2014] and the model of Fortunato et al. [2017].
###Code
class Args(object):
# Data settings
archive_file_name = 'ptb.zip'
archive_file_root = './data'
download_root = 'https://github.com/d2l-ai/d2l-en/raw/master/data'
# Model settings
embedding_size = 650
hidden_size = 650
num_layers = 2
tie_weights = False
save = './model.params'
# Training settings
init_scale = 0.05
num_steps = 35
batch_size = 20
random_shift = False
keep_prob = 0.5
lr_start = 1.0
lr_decay = 0.8
clip_norm = 5
num_epochs = 39
high_lr_epochs = 6
# BBB settings
prior_pi = 0.25
prior_sigma1 = np.exp(-1.0)
prior_sigma2 = np.exp(-7.0)
bbb_on_bias = False
# Inference settings
sample_mode = False
args = Args()
###Output
_____no_output_____
###Markdown
**2. Training, validation, and test data** In this section of the assignment, we show how to obtain, pre-process, and load into mini-batches the data that make up the *Penn Treebank* (PTB). This linguistic corpus consists of articles published in the *Wall Street Journal* (WSJ) and is split into training, validation, and test subsets. The `read_ptb` function is responsible for fetching each of these subsets.
###Code
def read_ptb(subset='train', root=args.archive_file_root):
assert subset in ('train', 'valid', 'test'), \
"Invalid subset %s; must be one of ['train', 'valid', 'test']"%subset
archive_file_path = os.path.join(root, args.archive_file_name)
if not os.path.isfile(archive_file_path):
if not os.path.isdir(root):
os.makedirs(root)
archive_file_url = os.path.join(args.download_root,
args.archive_file_name)
urllib.request.urlretrieve(archive_file_url, archive_file_path)
with zipfile.ZipFile(archive_file_path, mode='r') as f:
data_file_name = 'ptb/ptb.{}.txt'.format(subset)
raw_text = f.read(data_file_name).decode('utf-8').replace('\n', '<eos>')
return raw_text.split()
train_subset, valid_subset, test_subset = [
read_ptb(subset=subset)
for subset in ['train', 'valid', 'test']
]
'# tokens in train_subset: {}'.format(len(train_subset))
###Output
_____no_output_____
###Markdown
Each subset is, initially, a sequence of sequences of words, but the `read_ptb` function turns it into a single sequence of words. Note, from the output of the cell above, that the training subset is represented, in this form, by a sequence of 929,589 words.The `Vocab` class, defined below, represents the set of words that make up a linguistic corpus and provides two bijections. The first associates, with each word in this set, a non-negative integer called its index. The other is simply the inverse of the first.
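As a rough illustration of how those two mappings get used once a `vocab` is built from the training subset (a sketch; the exact index depends on token frequencies in the corpus):

```python
# illustrative only -- assumes `vocab` has already been built, as done a few cells below
idx = vocab['market']          # word -> index (unknown words map to vocab.unk == 0)
word = vocab.to_tokens(idx)    # index -> word, recovering 'market'
print(idx, word)
```

The class itself: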
###Code
class Vocab(object):
def __init__(self, subset):
counter = collections.Counter(subset)
self.token_freqs = sorted(counter.items(), key=lambda x: x[1],
reverse=True)
self.unk, uniq_tokens = 0, ['<unk>']
uniq_tokens += [token for token, freq in self.token_freqs
if token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
###Output
_____no_output_____
###Markdown
Let's see how many words there are in the vocabulary (or dictionary) of the training subset.
###Code
vocab = Vocab(train_subset)
'vocab size: {}'.format(len(vocab))
###Output
_____no_output_____
###Markdown
Now let's define how each subset of the corpus, represented as a sequence of words, should be split into mini-batches. In the function below, called `bptt_batchify`, the `batch_size` parameter indicates the number of subsequences of the sequence `corpus` in each mini-batch, and the `num_steps` parameter gives the number of words per subsequence.Let $(X, Y)$ be one of the mini-batches returned by `bptt_batchify`. The value $X_{ij}$ represents the j-th word of the i-th subsequence of this mini-batch, and the value $Y_{ij}$ indicates the word that follows the one represented by $X_{ij}$ in the sequence `corpus`.
###Code
def bptt_batchify(corpus, num_steps, batch_size):
num_indices = ((len(corpus) - 1) // batch_size) * batch_size
Xs = np.array(corpus[:num_indices])
Ys = np.array(corpus[1:(num_indices + 1)])
Xs, Ys = Xs.reshape((batch_size, -1)), Ys.reshape((batch_size, -1))
num_batches = Xs.shape[1] // num_steps
for i in range(0, num_batches * num_steps, num_steps):
X = Xs[:, i:(i+num_steps)]
Y = Ys[:, i:(i+num_steps)]
# X.shape = Y.shape = (batch_size, num_steps)
yield X, Y
###Output
_____no_output_____
###Markdown
Check, below, that two adjacent mini-batches returned by `bptt_batchify` are also adjacent in the input sequence. That is why the method implemented in this function is called sequential partitioning (even though it does not induce a partition in the strict mathematical sense).
###Code
my_seq = list(range(30))
for X, Y in bptt_batchify(my_seq, num_steps=6, batch_size=2):
print('X:', X, '\nY:', Y)
###Output
X: [[ 0. 1. 2. 3. 4. 5.]
[14. 15. 16. 17. 18. 19.]]
Y: [[ 1. 2. 3. 4. 5. 6.]
[15. 16. 17. 18. 19. 20.]]
X: [[ 6. 7. 8. 9. 10. 11.]
[20. 21. 22. 23. 24. 25.]]
Y: [[ 7. 8. 9. 10. 11. 12.]
[21. 22. 23. 24. 25. 26.]]
###Markdown
The `SeqDataLoader` class, defined below, gathers the pre-processing and mini-batch iteration methods for the data of a corpus subset.
###Code
class SeqDataLoader(object):
def __init__(self, subset, vocab, num_steps=args.num_steps,
batch_size=args.batch_size, batchify_fn=bptt_batchify,
random_shift=False):
corpus = [vocab[token] for token in subset]
shift = random.randint(0, num_steps) if random_shift else 0
self.corpus = corpus[shift:]
self.vocab = vocab
self.num_steps = num_steps
self.batch_size = batch_size
self.num_batches = ((len(corpus) - 1) // batch_size) // num_steps
self.get_iter = lambda: batchify_fn(self.corpus, num_steps, batch_size)
def __iter__(self):
return self.get_iter()
train_iter = SeqDataLoader(train_subset, vocab, random_shift=args.random_shift)
valid_iter = SeqDataLoader(valid_subset, vocab)
test_iter = SeqDataLoader(test_subset, vocab)
###Output
_____no_output_____
###Markdown
From the output of the cell below, we can see that each mini-batch of the training subset contains 20 subsequences of 35 words each.
###Code
for X, Y in train_iter:
print('X.shape:', X.shape, '\nY.shape:', Y.shape)
break
###Output
X.shape: (20, 35)
Y.shape: (20, 35)
###Markdown
**3. The model of Zaremba et al. [2014]** The model proposed by Zaremba et al. [2014] consists of an *embedding* layer at the input, a recurrent core with LSTM architecture, and a dense layer at the output. In this assignment, we implement only the medium-sized configuration of this model.The embedding layer associates, with each word of the dictionary of a linguistic corpus, a vector of the real coordinate space of dimension `embedding_size`. In this model, the layer takes as input a matrix of shape `(num_steps, batch_size)` and returns a tensor of shape `(num_steps, batch_size, embedding_size)`.
###Code
encoder = nn.Embedding(input_dim=len(vocab),
output_dim=args.embedding_size)
encoder.initialize(init.Normal(sigma=args.init_scale), force_reinit=True)
encoded = encoder(X.T)
'encoded.shape: {}'.format(encoded.shape)
###Output
_____no_output_____
###Markdown
The recurrent core of this model is made of `num_layers` LSTM layers. The state space of each of them has dimension `hidden_size`. This core, which models the sequential dependency among words, returns two tensors representing, respectively, the output, of shape `(num_steps, batch_size, hidden_size)`, and the state-memory pair, of shape `(2, num_layers, batch_size, hidden_size)`. As input, it receives the output of the previous layer and the initial value of the state-memory pair.
###Code
lstm_layer = rnn.LSTM(hidden_size=args.hidden_size,
num_layers=args.num_layers,
dropout=1-args.keep_prob,
input_size=args.embedding_size)
lstm_layer.initialize(init.Normal(sigma=args.init_scale), force_reinit=True)
state = lstm_layer.begin_state(batch_size=args.batch_size)
'# states: {}, state[_].shape: {}'.format(len(state), state[0].shape)
output, state = lstm_layer(encoded, state)
'output.shape: {}'.format(output.shape)
###Output
_____no_output_____
###Markdown
At the output of this model we have a dense layer. It receives the output of the recurrent core, after reshaping it to `(num_steps * batch_size, hidden_size)`, and returns a matrix of shape `(num_steps * batch_size, vocab_size)`. Applying the `softmax` function row by row to this matrix gives us `num_steps * batch_size` probability distributions over the words of the dictionary.The `PTBModel` class encapsulates this model, implements its *forward* operation, and also defines the function that returns the initial value of the state-memory pair. Since the *backward* operation is inferred automatically by Apache MXNet's `autograd` library, we do not need to worry about it.
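As a quick sanity check of that claim, here is a small sketch that reuses `output` from the LSTM cell above (it is not part of the `PTBModel` class defined next):

```python
# sketch: turning decoder activations into per-position word distributions
decoder = nn.Dense(len(vocab), in_units=args.hidden_size)
decoder.initialize(init.Normal(sigma=args.init_scale))
decoded = decoder(output.reshape((-1, args.hidden_size)))   # (num_steps * batch_size, vocab_size)
probs = npx.softmax(decoded)                                # row-wise softmax over the vocabulary
print(probs.shape, probs.sum(axis=1)[:3])                   # each row sums to ~1
```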
###Code
class PTBModel(nn.HybridBlock):
def __init__(self, vocab_size, embedding_size=args.embedding_size,
hidden_size=args.hidden_size, num_layers=args.num_layers,
dropout=1-args.keep_prob, tie_weights=args.tie_weights,
**kwargs):
super(PTBModel, self).__init__(**kwargs)
with self.name_scope():
self.drop = nn.Dropout(dropout)
self.encoder = nn.Embedding(input_dim=vocab_size,
output_dim=embedding_size)
self.lstm = rnn.LSTM(hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout,
input_size=embedding_size)
if tie_weights:
self.decoder = nn.Dense(vocab_size, in_units=hidden_size,
params=self.encoder.params)
else:
self.decoder = nn.Dense(vocab_size, in_units=hidden_size)
self.hidden_size = hidden_size
def hybrid_forward(self, F, inputs, state):
# inputs.shape = (batch_size, num_steps)
# encoded.shape = (num_steps, batch_size, embedding_size)
encoded = self.drop(self.encoder(inputs.T))
# output.shape = (num_steps, batch_size, hidden_size)
# state[_].shape = (num_layers, batch_size, hidden_size)
output, state = self.lstm(encoded, state)
output = self.drop(output)
# decoded.shape = (num_steps * batch_size, vocab_size)
decoded = self.decoder(output.reshape((-1, self.hidden_size)))
return decoded, state
def begin_state(self, *args, **kwargs):
return self.lstm.begin_state(*args, **kwargs)
###Output
_____no_output_____
###Markdown
RNNs are prone to vanishing and exploding gradient problems. The first can be mitigated by using the LSTM architecture, and the second can be controlled by using the `grad_clipping` function defined below together with the truncated BPTT method, which is implicit in the way we split the corpus subsets into mini-batches and in the training of the model.
###Code
def grad_clipping(model, clip_norm):
params = [p.data() for p in model.collect_params().values()]
norm = math.sqrt(sum((p.grad ** 2).sum() for p in params))
if norm > clip_norm:
for param in params:
param.grad[:] *= clip_norm / norm
###Output
_____no_output_____
###Markdown
The training of this model is implemented in the functions `train`, `train_epoch`, and `eval`, defined in the next three cells. The main statistic they return or print is the [word-level perplexity](https://en.wikipedia.org/wiki/Perplexity).
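For reference, the word-level perplexity printed by these functions is just the exponential of the average per-word cross-entropy,

$$\text{ppl} = \exp\left(-\frac{1}{N}\sum_{t=1}^{N}\log p(w_t \mid w_{<t})\right),$$

which is what the `math.exp(...)` calls below compute from the accumulated loss.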
###Code
def eval(model, data_iter, loss, ctx):
num_steps = data_iter.num_steps
batch_size = data_iter.batch_size
state = model.begin_state(batch_size=batch_size, ctx=ctx)
loss_sum = 0
steps_sum = 0
for X, Y in data_iter:
X = X.as_in_context(ctx)
# Y.shape = (batch_size, num_steps)
y = Y.T.reshape((-1,))
# y.shape = (num_steps * batch_size)
y = y.as_in_context(ctx)
yhat, state = model(X, state)
L = loss(yhat, y)
# Sum over the sequence
sequence_neg_log_prob = L.reshape((batch_size, num_steps)).sum(axis=1)
# Average over the batch
data_loss = sequence_neg_log_prob.mean()
loss_sum += data_loss
steps_sum += num_steps
return loss_sum / steps_sum
def train_epoch(model, train_iter, loss, clip_norm, trainer, ctx):
start_time = time.time()
num_steps = train_iter.num_steps
batch_size = train_iter.batch_size
state = model.begin_state(batch_size=batch_size, ctx=ctx)
loss_sum = 0
steps_sum = 0
for X, Y in train_iter:
for s in state: s.detach()
X = X.as_in_context(ctx)
# Y.shape = (batch_size, num_steps)
y = Y.T.reshape((-1,))
# y.shape = (num_steps * batch_size)
y = y.as_in_context(ctx)
with autograd.record():
yhat, state = model(X, state)
L = loss(yhat, y)
# Sum over the sequence
sequence_neg_log_prob = L.reshape((batch_size, num_steps)).sum(axis=1)
# Average over the batch
data_loss = sequence_neg_log_prob.mean()
data_loss.backward()
grad_clipping(model, clip_norm)
trainer.step(batch_size=1)
loss_sum += data_loss
steps_sum += num_steps
return loss_sum / steps_sum, time.time() - start_time
def train(model, train_iter, valid_iter, test_iter, init_scale=args.init_scale,
lr=args.lr_start, lr_decay=args.lr_decay, num_epochs=args.num_epochs,
high_lr_epochs=args.high_lr_epochs, clip_norm=args.clip_norm,
ctx=context.cpu()):
loss = gluon.loss.SoftmaxCrossEntropyLoss()
model.initialize(ctx=ctx, force_reinit=True,
init=init.Normal(sigma=init_scale))
trainer = gluon.Trainer(model.collect_params(), 'sgd',
{'learning_rate': lr})
model.hybridize()
# Train and check the progress
for epoch in range(num_epochs):
if epoch >= high_lr_epochs:
lr = lr * lr_decay
trainer._init_optimizer('sgd', {'learning_rate': lr})
train_loss, speed = train_epoch(model, train_iter, loss, clip_norm,
trainer, ctx)
print('[Epoch %d] time cost %.2fs, train loss %.2f, train ppl %.2f'%(
epoch, speed, train_loss, math.exp(train_loss)))
valid_loss = eval(model, valid_iter, loss, ctx)
print('valid loss %.2f, valid ppl %.2f'%(valid_loss, math.exp(valid_loss)))
test_loss = eval(model, test_iter, loss, ctx)
print('test loss %.2f, test ppl %.2f'%(test_loss, math.exp(test_loss)))
model.hybridize(active=False)
def try_gpu(device_id=0):
if context.num_gpus() >= device_id + 1:
return context.gpu(device_id)
else:
return context.cpu()
ctx = try_gpu()
print(ctx)
###Output
gpu(0)
###Markdown
Next, let's instantiate the model and run its training. In the paper by Zaremba et al. [2014], the medium-sized configuration of this model achieved word perplexities of 86.2 and 82.7 on the validation and test subsets, respectively. Since the model weights are initialized randomly, the results may vary each time we run the training. The best result we obtained in this notebook was 86.04 and 81.87 on the same subsets, in that order.
###Code
model = PTBModel(len(vocab))
train(model, train_iter, valid_iter, test_iter, ctx=ctx)
###Output
[Epoch 0] time cost 22.38s, train loss 5.91, train ppl 367.70
valid loss 5.36, valid ppl 211.76
test loss 5.33, test ppl 207.05
[Epoch 1] time cost 22.08s, train loss 5.19, train ppl 179.45
valid loss 5.05, valid ppl 155.49
test loss 5.03, test ppl 152.29
[Epoch 2] time cost 21.87s, train loss 4.94, train ppl 140.39
valid loss 4.88, valid ppl 131.68
test loss 4.86, test ppl 128.88
[Epoch 3] time cost 21.99s, train loss 4.78, train ppl 119.35
valid loss 4.78, valid ppl 119.08
test loss 4.76, test ppl 117.02
[Epoch 4] time cost 21.86s, train loss 4.66, train ppl 105.65
valid loss 4.70, valid ppl 109.80
test loss 4.68, test ppl 107.53
[Epoch 5] time cost 21.97s, train loss 4.57, train ppl 96.10
valid loss 4.65, valid ppl 104.07
test loss 4.62, test ppl 101.82
[Epoch 6] time cost 21.95s, train loss 4.45, train ppl 85.39
valid loss 4.59, valid ppl 98.32
test loss 4.56, test ppl 95.85
[Epoch 7] time cost 21.88s, train loss 4.35, train ppl 77.31
valid loss 4.55, valid ppl 94.68
test loss 4.52, test ppl 92.13
[Epoch 8] time cost 21.95s, train loss 4.26, train ppl 70.96
valid loss 4.52, valid ppl 92.06
test loss 4.49, test ppl 89.31
[Epoch 9] time cost 21.95s, train loss 4.20, train ppl 66.38
valid loss 4.51, valid ppl 90.67
test loss 4.48, test ppl 87.88
[Epoch 10] time cost 21.86s, train loss 4.14, train ppl 62.52
valid loss 4.49, valid ppl 89.54
test loss 4.46, test ppl 86.66
[Epoch 11] time cost 22.02s, train loss 4.09, train ppl 59.63
valid loss 4.49, valid ppl 88.73
test loss 4.45, test ppl 85.68
[Epoch 12] time cost 21.97s, train loss 4.05, train ppl 57.41
valid loss 4.48, valid ppl 88.24
test loss 4.44, test ppl 84.99
[Epoch 13] time cost 21.89s, train loss 4.02, train ppl 55.49
valid loss 4.48, valid ppl 87.88
test loss 4.44, test ppl 84.57
[Epoch 14] time cost 21.89s, train loss 3.99, train ppl 54.06
valid loss 4.47, valid ppl 87.65
test loss 4.43, test ppl 84.11
[Epoch 15] time cost 22.10s, train loss 3.97, train ppl 53.00
valid loss 4.47, valid ppl 87.38
test loss 4.43, test ppl 83.72
[Epoch 16] time cost 22.03s, train loss 3.95, train ppl 52.07
valid loss 4.47, valid ppl 87.24
test loss 4.43, test ppl 83.54
[Epoch 17] time cost 21.93s, train loss 3.94, train ppl 51.35
valid loss 4.47, valid ppl 86.94
test loss 4.42, test ppl 83.18
[Epoch 18] time cost 21.95s, train loss 3.93, train ppl 50.71
valid loss 4.47, valid ppl 86.93
test loss 4.42, test ppl 83.14
[Epoch 19] time cost 22.17s, train loss 3.92, train ppl 50.31
valid loss 4.46, valid ppl 86.83
test loss 4.42, test ppl 82.93
[Epoch 20] time cost 21.90s, train loss 3.91, train ppl 49.92
valid loss 4.46, valid ppl 86.67
test loss 4.42, test ppl 82.75
[Epoch 21] time cost 21.92s, train loss 3.90, train ppl 49.54
valid loss 4.46, valid ppl 86.70
test loss 4.42, test ppl 82.74
[Epoch 22] time cost 22.00s, train loss 3.90, train ppl 49.48
valid loss 4.46, valid ppl 86.53
test loss 4.41, test ppl 82.54
[Epoch 23] time cost 21.94s, train loss 3.90, train ppl 49.18
valid loss 4.46, valid ppl 86.51
test loss 4.41, test ppl 82.51
[Epoch 24] time cost 21.94s, train loss 3.89, train ppl 49.03
valid loss 4.46, valid ppl 86.47
test loss 4.41, test ppl 82.45
[Epoch 25] time cost 21.94s, train loss 3.89, train ppl 48.88
valid loss 4.46, valid ppl 86.35
test loss 4.41, test ppl 82.32
[Epoch 26] time cost 21.99s, train loss 3.89, train ppl 48.73
valid loss 4.46, valid ppl 86.39
test loss 4.41, test ppl 82.34
[Epoch 27] time cost 21.91s, train loss 3.89, train ppl 48.75
valid loss 4.46, valid ppl 86.30
test loss 4.41, test ppl 82.25
[Epoch 28] time cost 21.94s, train loss 3.88, train ppl 48.62
valid loss 4.46, valid ppl 86.29
test loss 4.41, test ppl 82.23
[Epoch 29] time cost 22.11s, train loss 3.88, train ppl 48.59
valid loss 4.46, valid ppl 86.25
test loss 4.41, test ppl 82.18
[Epoch 30] time cost 22.05s, train loss 3.88, train ppl 48.57
valid loss 4.46, valid ppl 86.25
test loss 4.41, test ppl 82.17
[Epoch 31] time cost 21.99s, train loss 3.88, train ppl 48.45
valid loss 4.46, valid ppl 86.25
test loss 4.41, test ppl 82.15
[Epoch 32] time cost 22.05s, train loss 3.88, train ppl 48.45
valid loss 4.46, valid ppl 86.25
test loss 4.41, test ppl 82.16
[Epoch 33] time cost 21.97s, train loss 3.88, train ppl 48.43
valid loss 4.46, valid ppl 86.23
test loss 4.41, test ppl 82.14
[Epoch 34] time cost 21.93s, train loss 3.88, train ppl 48.47
valid loss 4.46, valid ppl 86.21
test loss 4.41, test ppl 82.12
[Epoch 35] time cost 21.98s, train loss 3.88, train ppl 48.43
valid loss 4.46, valid ppl 86.20
test loss 4.41, test ppl 82.11
[Epoch 36] time cost 21.91s, train loss 3.88, train ppl 48.41
valid loss 4.46, valid ppl 86.19
test loss 4.41, test ppl 82.10
[Epoch 37] time cost 21.90s, train loss 3.88, train ppl 48.41
valid loss 4.46, valid ppl 86.19
test loss 4.41, test ppl 82.10
[Epoch 38] time cost 21.90s, train loss 3.88, train ppl 48.46
valid loss 4.46, valid ppl 86.18
test loss 4.41, test ppl 82.09
###Markdown
A curious fact: when its authors (who were working at Google Brain at the time) trained this model back in 2014, it took half a day of processing, even on a GPU. Today, using a free environment, we run the same training in about 23 minutes. **4. Bayesian approach to neural networks** The Bayesian approach to neural networks aims to represent the systematic uncertainty in this class of models. It consists of assigning to the network's weight vector $\mathbf{w}$, an unknown numerical constant, a probability distribution that expresses our expectation, based on the available information, that a given value of $\mathbf{w}$ is the true one. It helps us answer the following questions: given the size and diversity of the training set, what is the uncertainty associated with the structure and the weights of a neural network trained on that set, and how confident is that network when making each prediction? Under this approach, training a neural network amounts to computing the posterior distribution of $\mathbf{w}$ given the training set $\mathcal{D}$, that is, the distribution $p(\mathbf{w} \mid \mathcal{D})$. The predictive distribution of $\mathbf{y} \mid \mathbf{x}$ is then defined by$$\mathbb{E}_{p(\mathbf{w} \mid \mathcal{D})}[p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})].$$Since obtaining the posterior $p(\mathbf{w} \mid \mathcal{D})$ is intractable for neural networks of any practical size, Blundell et al. [2015] suggest considering a parametric family $\mathcal{Q} = \{q(\mathbf{w} \mid \theta) : \theta \in \Theta\}$ of distributions and then finding the parameters $\theta$ of the distribution $q(\mathbf{w} \mid \theta)$ that minimize its Kullback-Leibler ($\text{KL}$) divergence from the posterior $p(\mathbf{w} \mid \mathcal{D})$:$$\begin{aligned}\theta^{*} & = \underset{\theta}{\operatorname{arg min}} \text{KL}(q(\mathbf{w} \mid \theta) \Vert p(\mathbf{w} \mid \mathcal{D})) \\ & = \underset{\theta}{\operatorname{arg min}} \int q(\mathbf{w} \mid \theta) \log \frac{q(\mathbf{w} \mid \theta)} {p(\mathbf{w} \mid \mathcal{D})}d\mathbf{w} \\ & = \underset{\theta}{\operatorname{arg min}} \int q(\mathbf{w} \mid \theta) \left[\log q(\mathbf{w} \mid \theta) - \log \frac{p(\mathcal{D} \mid \mathbf{w})\,p(\mathbf{w})}{p(\mathcal{D})}\right]d\mathbf{w} \\ & = \underset{\theta}{\operatorname{arg min}} \int q(\mathbf{w} \mid \theta) \left[\log \frac{q(\mathbf{w} \mid \theta)}{p(\mathbf{w})} - \log p(\mathcal{D} \mid \mathbf{w}) + \log p(\mathcal{D})\right]d\mathbf{w} \\ & = \underset{\theta}{\operatorname{arg min}} \text{KL}(q(\mathbf{w} \mid \theta) \Vert p(\mathbf{w})) - \mathbb{E}_{q(\mathbf{w} \mid \theta)}[\log p(\mathcal{D} \mid \mathbf{w})] + \log p(\mathcal{D}) \\ & = \underset{\theta}{\operatorname{arg min}}\text{KL}(q(\mathbf{w} \mid \theta) \Vert p(\mathbf{w})) - \mathbb{E}_{q(\mathbf{w} \mid \theta)}[\log p(\mathcal{D} \mid \mathbf{w})], \\\end{aligned}$$where the term $\text{KL}(q(\mathbf{w} \mid \theta) \Vert p(\mathbf{w}))$ is called the complexity cost and acts as a regularizer, and the term $- \mathbb{E}_{q(\mathbf{w} \mid \theta)}[\log p(\mathcal{D} \mid \mathbf{w})]$ is called the log-likelihood cost. Note that the first term does not depend on the output of the neural network. If we assume that the components of $\mathbf{w}$ are mutually independent, as is the case here, then we can compute the complexity cost layer by layer.The model of Fortunato et al. [2017] fixes the variational approximation $q(\mathbf{w} \mid \theta)$ to be a diagonal multivariate normal distribution with parameters $\theta$, and the prior $p(\mathbf{w})$ to be a mixture of diagonal multivariate normals, centered at the origin, with parameters `prior_pi`, `prior_sigma1`, and `prior_sigma2`. The classes `CustomNormal` and `CustomScaleMixture` define these two distributions. Note that we use `rho` rather than `sigma` (that is, $\rho$ rather than $\sigma$) to parametrize a multivariate normal distribution; here we define $\sigma = \log(1 + \exp(\rho))$, so that $\sigma > 0$ always holds.
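Before moving on to the implementation, the cell below gives a minimal NumPy sketch, independent of the model code, of these two ingredients: the softplus parametrization $\sigma = \log(1 + \exp(\rho))$ and the one-sample estimate $\log q(\mathbf{w} \mid \theta) - \log p(\mathbf{w})$ of the $\text{KL}$ term. For simplicity the sketch assumes a single scalar weight and a standard normal prior instead of the scale mixture.
###Code
import numpy as np

# Minimal sketch only: one scalar weight, softplus-parametrized sigma, and a
# one-sample Monte Carlo estimate of log q(w | theta) - log p(w).
# The prior here is an assumed standard normal, not the scale mixture used below.
rng = np.random.default_rng(0)

mu, rho = 0.1, -3.0                     # variational parameters theta = (mu, rho)
sigma = np.log1p(np.exp(rho))           # softplus keeps sigma > 0

w = mu + sigma * rng.standard_normal()  # reparametrized sample w ~ q(. | theta)

log_q = -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((w - mu) / sigma) ** 2
log_p = -0.5 * np.log(2 * np.pi) - 0.5 * w ** 2   # standard normal prior (assumed)

print(sigma, log_q - log_p)             # single-sample estimate of the KL term
###Output
_____no_output_____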
###Code
class CustomNormal(object):
def __init__(self, F, mu, rho, shape=None):
super(CustomNormal).__init__()
self.mu = mu
self.rho = rho
if F is nd:
self.normal = lambda : F.np.random.normal(size=rho.shape, ctx=rho.ctx)
else:
self.normal = lambda : F.np.random.normal(size=shape)
self.log1p = F.np.log1p
self.exp = F.np.exp
self.log = F.np.log
@property
def sigma(self):
return self.log1p(self.exp(self.rho))
def sample(self):
epsilon = self.normal()
return self.mu + self.sigma * epsilon
def _squared_difference(self, x, y):
return (x - y) ** 2
def log_prob(self, x):
return (-0.5 * self.log(2. * math.pi) - self.log(self.sigma)
-0.5 * self._squared_difference(x / self.sigma,
self.mu / self.sigma))
class CustomScaleMixture(object):
def __init__(self, F, pi, sigma1, sigma2, ctx=None, dtype=None):
super(CustomScaleMixture).__init__()
if F is nd:
to_array = lambda v : F.array(v, ctx=ctx, dtype=dtype).as_np_ndarray()
else:
to_array = lambda v : v
self.log = F.np.log
self.exp = F.np.exp
self.max = F.np.max
self.sum = F.np.sum
self.squeeze = F.np.squeeze
self.stack = F.np.stack
self.mu, self.pi, self.sigma1, self.sigma2 = (
to_array(v) for v in (0.0, pi, sigma1, sigma2))
rho1 = self.log(self.exp(self.sigma1) - 1.0)
rho2 = self.log(self.exp(self.sigma2) - 1.0)
self.n1 = CustomNormal(F, self.mu, rho1)
self.n2 = CustomNormal(F, self.mu, rho2)
# This function is more numerically stable than log(sum(exp(x))).
def _log_sum_exp(self, x, axis, keepdims=False):
max_x = self.max(x, axis=axis, keepdims=True)
x = self.log(self.sum(self.exp(x - max_x), axis=axis, keepdims=True))
x = x + max_x
if not keepdims:
x = self.squeeze(x, axis=axis)
return x
def log_prob(self, x):
mix1 = self.sum(self.n1.log_prob(x), -1) + self.log(self.pi)
mix2 = self.sum(self.n2.log_prob(x), -1) + self.log(1.0 - self.pi)
prior_mix = self.stack([mix1, mix2])
lse_mix = self._log_sum_exp(prior_mix, [0])
return self.sum(lse_mix)
###Output
_____no_output_____
###Markdown
To initialize the parameters related to $\rho$ in the Bayesian layers, we need to define the `Uniform` class and the `non_lstm_rho_initializer` and `lstm_rho_initializer` functions, as specified by Fortunato et al. [2017].
###Code
class Uniform(init.Initializer):
def __init__(self, low=-0.07, high=0.07):
super(Uniform, self).__init__(low=low, high=high)
self.low = low
self.high = high
def _init_weight(self, _, arr):
np.random.uniform(self.low, self.high, arr.shape, out=arr)
def non_lstm_rho_initializer(prior_pi, prior_sigma1, prior_sigma2):
prior_sigma = np.sqrt(prior_pi * (prior_sigma1 ** 2) +
(1 - prior_pi) * (prior_sigma2 ** 2))
minval = np.log(np.exp(prior_sigma / 2.0) - 1.0)
maxval = np.log(np.exp(prior_sigma / 1.0) - 1.0)
return Uniform(minval, maxval)
def lstm_rho_initializer(prior_pi, prior_sigma1, prior_sigma2):
prior_sigma = np.sqrt(prior_pi * (prior_sigma1 ** 2) +
(1 - prior_pi) * (prior_sigma2 ** 2))
minval = np.log(np.exp(prior_sigma / 4.0) - 1.0)
maxval = np.log(np.exp(prior_sigma / 2.0) - 1.0)
return Uniform(minval, maxval)
###Output
_____no_output_____
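###Markdown
As a quick sanity check of these initializers, the cell below prints the uniform ranges they produce for one assumed set of prior hyperparameters. The values are illustrative only; the ones actually used by the model come from `args.prior_pi`, `args.prior_sigma1`, and `args.prior_sigma2`, defined earlier in this notebook.
###Code
# Assumed, illustrative prior hyperparameters -- not necessarily those stored in args.
pi, sigma1, sigma2 = 0.25, 0.05, 2.0
dense_rho_init = non_lstm_rho_initializer(pi, sigma1, sigma2)
lstm_rho_init = lstm_rho_initializer(pi, sigma1, sigma2)
print('non-LSTM rho init range:', dense_rho_init.low, dense_rho_init.high)
print('LSTM rho init range:', lstm_rho_init.low, lstm_rho_init.high)
###Output
_____no_output_____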
###Markdown
**4.1 Bayesian layers** To turn the model of Zaremba et al. [2014] into the model of Fortunato et al. [2017], we need to define a Bayesian version of the *embedding*, LSTM, and dense layers. To do so, we follow the steps described in this [article](https://mxnet.incubator.apache.org/api/python/docs/tutorials/extend/custom_layer.html) and use the source code of the original version of these layers as a starting point. In this new version, the parameters of each layer are no longer its weights, but rather the parameters of the variational approximation to the posterior distribution of those weights. This can be seen in the `__init__` constructor of each of the three classes defined below. The main difference with respect to the original version, however, concerns the *forward* operation. The new procedure can be summarized as follows:* **Step 1**: Obtain the layer weights. In training mode or in sampling mode, sample the weights from the variational approximation; otherwise, take the mean vector of that distribution.* **Step 2**: Run the *forward* operation exactly as in the original version of the layer, using the weights obtained in step 1.* **Step 3**: Compute the layer's complexity cost, that is, an estimate of the $\text{KL}$ divergence between the variational approximation and the prior distribution of the weights, using the weights obtained in step 1.* **Step 4**: Store the complexity cost computed in step 3.* **Step 5**: Return the result of the operation executed in step 2.This *forward* operation is implemented in the `forward`, `hybrid_forward`[, `_forward_kernel`], and `_get_total_kl_cost` methods of the classes defined in this section. A small framework-free sketch of these five steps is given right below; after it, the next code cell defines the `BayesianEmbedding` class.
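The sketch uses plain NumPy, a toy dense layer, and an assumed standard normal prior (the actual layers use the scale mixture defined above); it is only meant to make the five steps concrete and is not part of the model.
###Code
import numpy as np

# Framework-free sketch of the five forward steps for a toy Bayesian dense layer.
rng = np.random.default_rng(0)

def log_normal(x, mu, sigma):
    # Elementwise Gaussian log density.
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2

def toy_bayesian_dense(x, weight_mu, weight_rho, training=True):
    sigma = np.log1p(np.exp(weight_rho))
    # Step 1: sample the weights (training/sampling mode) or take the mean vector.
    if training:
        weight = weight_mu + sigma * rng.standard_normal(weight_mu.shape)
    else:
        weight = weight_mu
    # Step 2: the usual forward operation, using the weights from step 1.
    out = x @ weight
    # Step 3: one-sample estimate of the complexity cost, log q(w) - log p(w),
    # with an assumed standard normal prior.
    kl_cost = log_normal(weight, weight_mu, sigma).sum() - log_normal(weight, 0.0, 1.0).sum()
    # Steps 4 and 5: hand the cost back together with the layer output.
    return out, kl_cost

x = rng.standard_normal((4, 3))                       # toy batch of 4 inputs
out, kl = toy_bayesian_dense(x,
                             weight_mu=0.1 * rng.standard_normal((3, 2)),
                             weight_rho=np.full((3, 2), -3.0))
print(out.shape, kl)
###Output
_____no_output_____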
###Code
class BayesianEmbedding(nn.HybridBlock):
def __init__(self, input_dim, output_dim, prior_pi, prior_sigma1,
prior_sigma2, dtype='float32', weight_mu_initializer=None,
weight_rho_initializer=None, sample_mode=False, **kwargs):
super(BayesianEmbedding, self).__init__(**kwargs)
self._input_dim = input_dim
self._output_dim = output_dim
self._prior_pi = prior_pi
self._prior_sigma1 = prior_sigma1
self._prior_sigma2 = prior_sigma2
self._sample_mode = sample_mode
self._total_kl_cost = None
self._kwargs = {'input_dim': input_dim, 'output_dim': output_dim,
'dtype': dtype, 'sparse_grad': False}
with self.name_scope():
self.weight_mu = self.params.get('weight_mu',
shape=(input_dim, output_dim),
init=weight_mu_initializer,
dtype=dtype, allow_deferred_init=True)
self.weight_rho = self.params.get('weight_rho',
shape=(input_dim, output_dim),
init=weight_rho_initializer,
dtype=dtype, allow_deferred_init=True)
def __repr__(self):
s = '{name}({input_dim} -> {output_dim}, {dtype})'
return s.format(name=self.__class__.__name__, **self._kwargs)
def forward(self, x, *args):
emb, total_kl_cost = super(BayesianEmbedding, self).forward(x, *args)
self._total_kl_cost = total_kl_cost
return emb
def hybrid_forward(self, F, x, weight_mu, weight_rho):
weight_dist = CustomNormal(F, weight_mu, weight_rho, self.weight_rho.shape)
if autograd.is_training() or self._sample_mode:
weight = weight_dist.sample()
else:
weight = weight_dist.mu
emb = F.npx.embedding(x, weight, name='fwd', **self._kwargs)
# We could save computation here.
total_kl_cost = self._get_total_kl_cost(F, weight_dist, weight)
return emb, total_kl_cost
def kl_cost(self, scale=1.0):
assert self._total_kl_cost is not None, \
'You must execute a forward operation before getting the KL cost'
return self._total_kl_cost * scale
def _get_total_kl_cost(self, F, weight_dist, weight):
if F is nd:
ctx = self.weight_mu.list_ctx()[0]
dtype = self.weight_mu.dtype
prior = CustomScaleMixture(F, self._prior_pi, self._prior_sigma1,
self._prior_sigma2, ctx, dtype)
else:
prior = CustomScaleMixture(F, self._prior_pi, self._prior_sigma1,
self._prior_sigma2)
log_prior = prior.log_prob(weight)
log_variational_posterior = weight_dist.log_prob(weight).sum()
return log_variational_posterior - log_prior
@property
def sample_mode(self):
return self._sample_mode
@sample_mode.setter
def sample_mode(self, value):
self._sample_mode = value
###Output
_____no_output_____
###Markdown
Let us now move on to the definition of the `BayesianLSTM` class.
###Code
class BayesianLSTM(nn.HybridBlock):
def __init__(self, input_size, hidden_size, prior_pi, prior_sigma1,
prior_sigma2, num_layers=1, bidirectional=False, dtype='float32',
i2h_weight_mu_initializer=None, i2h_weight_rho_initializer=None,
h2h_weight_mu_initializer=None, h2h_weight_rho_initializer=None,
i2h_bias_mu_initializer='zeros', i2h_bias_rho_initializer=None,
h2h_bias_mu_initializer='zeros', h2h_bias_rho_initializer=None,
sample_mode=False, **kwargs):
super(BayesianLSTM, self).__init__(**kwargs)
self._input_size = input_size
self._hidden_size = hidden_size
self._prior_pi = prior_pi
self._prior_sigma1 = prior_sigma1
self._prior_sigma2 = prior_sigma2
self._num_layers = num_layers
self._dir = 2 if bidirectional else 1
self._dtype = dtype
self._i2h_weight_mu_initializer = i2h_weight_mu_initializer
self._i2h_weight_rho_initializer = i2h_weight_rho_initializer
self._h2h_weight_mu_initializer = h2h_weight_mu_initializer
self._h2h_weight_rho_initializer = h2h_weight_rho_initializer
self._i2h_bias_mu_initializer = i2h_bias_mu_initializer
self._i2h_bias_rho_initializer = i2h_bias_rho_initializer
self._h2h_bias_mu_initializer = h2h_bias_mu_initializer
self._h2h_bias_rho_initializer = h2h_bias_rho_initializer
self._sample_mode = sample_mode
self._params_shape = None
self._total_kl_cost = None
self._gates = 4 # number of gates in a LSTM layer
self._layout = 'TNC' # T, N and C stand for sequence length, batch size, ...
# and feature dimensions respectively.
ng, ni, nh = self._gates, input_size, hidden_size
for i in range(num_layers):
for j in ['l', 'r'][:self._dir]:
self._register_param('{}{}_i2h_weight_mu'.format(j, i),
shape=(ng*nh, ni),
init=i2h_weight_mu_initializer, dtype=dtype)
self._register_param('{}{}_i2h_weight_rho'.format(j, i),
shape=(ng*nh, ni),
init=i2h_weight_rho_initializer, dtype=dtype)
self._register_param('{}{}_h2h_weight_mu'.format(j, i),
shape=(ng*nh, nh),
init=h2h_weight_mu_initializer, dtype=dtype)
self._register_param('{}{}_h2h_weight_rho'.format(j, i),
shape=(ng*nh, nh),
init=h2h_weight_rho_initializer, dtype=dtype)
self._register_param('{}{}_i2h_bias_mu'.format(j, i),
shape=(ng*nh,),
init=i2h_bias_mu_initializer, dtype=dtype)
self._register_param('{}{}_i2h_bias_rho'.format(j, i),
shape=(ng*nh,),
init=i2h_bias_rho_initializer, dtype=dtype)
self._register_param('{}{}_h2h_bias_mu'.format(j, i),
shape=(ng*nh,),
init=h2h_bias_mu_initializer, dtype=dtype)
self._register_param('{}{}_h2h_bias_rho'.format(j, i),
shape=(ng*nh,),
init=h2h_bias_rho_initializer, dtype=dtype)
ni = nh * self._dir
def _register_param(self, name, shape, init, dtype):
p = self.params.get(name, shape=shape, init=init,
allow_deferred_init=True, dtype=dtype)
setattr(self, name, p)
return p
def __repr__(self):
s = '{name}({mapping}, {_layout}'
if self._num_layers != 1:
s += ', num_layers={_num_layers}'
if self._dir == 2:
s += ', bidirectional'
s += ')'
shape = self.l0_i2h_weight_mu.shape
mapping = '{0} -> {1}'.format(shape[1] if shape[1] else None,
shape[0] // self._gates)
return s.format(name=self.__class__.__name__, mapping=mapping,
**self.__dict__)
def _collect_params_with_prefix(self, prefix=''):
if prefix:
prefix += '.'
pattern = re.compile(r'(l|r)(\d)_(i2h|h2h)_(weight|bias)_(mu|rho)\Z')
def convert_key(m, bidirectional):
d, l, g, t, p = [m.group(i) for i in range(1, 6)]
if bidirectional:
return '_unfused.{}.{}_cell.{}_{}_{}'.format(l, d, g, t, p)
else:
return '_unfused.{}.{}_{}_{}'.format(l, g, t, p)
bidirectional = any(pattern.match(k).group(1) == 'r'
for k in self._reg_params)
ret = {prefix + convert_key(pattern.match(key), bidirectional) : val
for key, val in self._reg_params.items()}
for name, child in self._children.items():
ret.update(child._collect_params_with_prefix(prefix + name))
return ret
def state_info(self, batch_size=0):
return [{'shape': (self._num_layers * self._dir, batch_size,
self._hidden_size),
'__layout__': 'LNC', 'dtype': self._dtype},
{'shape': (self._num_layers * self._dir, batch_size,
self._hidden_size),
'__layout__': 'LNC', 'dtype': self._dtype}]
def cast(self, dtype):
super(BayesianLSTM, self).cast(dtype)
self._dtype = dtype
def begin_state(self, batch_size=0, func=nd.zeros, **kwargs):
states = []
for i, info in enumerate(self.state_info(batch_size)):
if info is not None:
info.update(kwargs)
else:
info = kwargs
state = func(name='%sh0_%d' % (self.prefix, i), **info).as_np_ndarray()
states.append(state)
return states
def __call__(self, inputs, states=None, **kwargs):
self.skip_states = states is None
if states is None:
if isinstance(inputs, nd.NDArray):
batch_size = inputs.shape[1] # TNC layout
states = self.begin_state(batch_size, ctx=inputs.context,
dtype=inputs.dtype)
else:
states = self.begin_state(0, func=symbol.zeros)
if isinstance(states, gluon.tensor_types):
states = [states]
return super(BayesianLSTM, self).__call__(inputs, states, **kwargs)
def forward(self, x, *args):
out = super(BayesianLSTM, self).forward(x, *args)
# out = (outputs, states, total_kl_cost)
self._total_kl_cost = out[2]
return out[0] if self.skip_states else (out[0], out[1])
def hybrid_forward(self, F, inputs, states, **kwargs):
if F is nd:
batch_size = inputs.shape[1] # TNC layout
if F is nd:
for state, info in zip(states, self.state_info(batch_size)):
if state.shape != info['shape']:
raise ValueError(
"Invalid recurrent state shape. Expecting %s, got %s."%(
str(info['shape']), str(state.shape)))
return self._forward_kernel(F, inputs, states, **kwargs)
def _forward_kernel(self, F, inputs, states, **kwargs):
params_mu = (kwargs['{}{}_{}_{}'.format(d, l, g, t)].reshape(-1)
for t in ['weight_mu', 'bias_mu']
for l in range(self._num_layers)
for d in ['l', 'r'][:self._dir]
for g in ['i2h', 'h2h'])
params_rho = (kwargs['{}{}_{}_{}'.format(d, l, g, t)].reshape(-1)
for t in ['weight_rho', 'bias_rho']
for l in range(self._num_layers)
for d in ['l', 'r'][:self._dir]
for g in ['i2h', 'h2h'])
params_mu = F.np._internal.rnn_param_concat(*params_mu, dim=0)
params_rho = F.np._internal.rnn_param_concat(*params_rho, dim=0)
if self._params_shape is None and F is nd:
self._params_shape = params_rho.shape
params_dist = CustomNormal(F, params_mu, params_rho, self._params_shape)
if autograd.is_training() or self._sample_mode:
params = params_dist.sample()
else:
params = params_dist.mu
rnn_args = states
rnn = F.npx.rnn(inputs, params, *rnn_args, use_sequence_length=False,
state_size=self._hidden_size, projection_size=None,
num_layers=self._num_layers, bidirectional=self._dir == 2,
p=0, state_outputs=True, mode='lstm',
lstm_state_clip_min=None,
lstm_state_clip_max=None,
lstm_state_clip_nan=False)
outputs, states = rnn[0], [rnn[1], rnn[2]]
# We could save computation here.
total_kl_cost = self._get_total_kl_cost(F, params_dist, params)
return outputs, states, total_kl_cost
def kl_cost(self, scale=1.0):
assert self._total_kl_cost is not None, \
'You must execute a forward operation before getting the KL cost'
return self._total_kl_cost * scale
def _get_total_kl_cost(self, F, params_dist, params):
if F is nd:
ctx = self.l0_i2h_weight_mu.list_ctx()[0]
dtype = self.l0_i2h_weight_mu.dtype
prior = CustomScaleMixture(F, self._prior_pi, self._prior_sigma1,
self._prior_sigma2, ctx, dtype)
else:
prior = CustomScaleMixture(F, self._prior_pi, self._prior_sigma1,
self._prior_sigma2)
log_prior = prior.log_prob(params)
log_variational_posterior = params_dist.log_prob(params).sum()
return log_variational_posterior - log_prior
@property
def sample_mode(self):
return self._sample_mode
@sample_mode.setter
def sample_mode(self, value):
self._sample_mode = value
###Output
_____no_output_____
###Markdown
Finally, let us define the `BayesianDense` class.
###Code
class BayesianDense(nn.HybridBlock):
def __init__(self, units, in_units, prior_pi, prior_sigma1, prior_sigma2,
activation=None, use_bias=True, flatten=True, dtype='float32',
weight_mu_initializer=None, weight_rho_initializer=None,
bias_mu_initializer='zeros', bias_rho_initializer=None,
sample_mode=False, bbb_on_bias=True, **kwargs):
super(BayesianDense, self).__init__(**kwargs)
self._units = units
self._in_units = in_units
self._prior_pi = prior_pi
self._prior_sigma1 = prior_sigma1
self._prior_sigma2 = prior_sigma2
self._flatten = flatten
self._sample_mode = sample_mode
self._total_kl_cost = None
with self.name_scope():
self.weight_mu = self.params.get('weight_mu',
shape=(units, in_units),
init=weight_mu_initializer,
dtype=dtype, allow_deferred_init=True)
self.weight_rho = self.params.get('weight_rho',
shape=(units, in_units),
init=weight_rho_initializer,
dtype=dtype, allow_deferred_init=True)
if use_bias:
self.bias_mu = self.params.get('bias_mu', shape=(units,),
init=bias_mu_initializer,
dtype=dtype, allow_deferred_init=True)
if bbb_on_bias:
self.bias_rho = self.params.get('bias_rho', shape=(units,),
init=bias_rho_initializer,
dtype=dtype, allow_deferred_init=True)
else:
self.bias_rho = None
else:
self.bias_mu = None
self.bias_rho = None
if activation is not None:
self.act = nn.Activation(activation, prefix=activation+'_')
else:
self.act = None
def __repr__(self):
s = '{name}({layout}, {act})'
shape = self.weight_mu.shape
return s.format(name=self.__class__.__name__,
act=self.act if self.act else 'linear',
layout='{0} -> {1}'.format(shape[1] if shape[1] else None,
shape[0]))
def forward(self, x, *args):
act, total_kl_cost = super(BayesianDense, self).forward(x, *args)
self._total_kl_cost = total_kl_cost
return act
def hybrid_forward(self, F, x, weight_mu, weight_rho,
bias_mu=None, bias_rho=None):
weight_dist = CustomNormal(F, weight_mu, weight_rho, self.weight_rho.shape)
if autograd.is_training() or self._sample_mode:
weight = weight_dist.sample()
else:
weight = weight_dist.mu
if bias_mu is not None:
if bias_rho is not None:
bias_dist = CustomNormal(F, bias_mu, bias_rho, self.bias_rho.shape)
if autograd.is_training() or self._sample_mode:
bias = bias_dist.sample()
else:
bias = bias_dist.mu
else:
bias = bias_mu
else:
bias = None
act = F.npx.fully_connected(x, weight, bias, no_bias=bias_mu is None,
num_hidden=self._units, flatten=self._flatten,
name='fwd')
if self.act is not None:
act = self.act(act)
# We could save computation here.
if bias_rho is not None:
total_kl_cost = self._get_total_kl_cost(F, weight_dist, weight,
bias_dist, bias)
else:
total_kl_cost = self._get_total_kl_cost(F, weight_dist, weight)
return act, total_kl_cost
def kl_cost(self, scale=1.0):
assert self._total_kl_cost is not None, \
'You must execute a forward operation before getting the KL cost'
return self._total_kl_cost * scale
def _get_total_kl_cost(self, F, weight_dist, weight,
bias_dist=None, bias=None):
if F is nd:
ctx = self.weight_mu.list_ctx()[0]
dtype = self.weight_mu.dtype
prior = CustomScaleMixture(F, self._prior_pi, self._prior_sigma1,
self._prior_sigma2, ctx, dtype)
else:
prior = CustomScaleMixture(F, self._prior_pi, self._prior_sigma1,
self._prior_sigma2)
if bias_dist is not None:
log_prior = prior.log_prob(weight) + prior.log_prob(bias)
log_variational_posterior = weight_dist.log_prob(weight).sum() + \
bias_dist.log_prob(bias).sum()
else:
log_prior = prior.log_prob(weight)
log_variational_posterior = weight_dist.log_prob(weight).sum()
return log_variational_posterior - log_prior
@property
def sample_mode(self):
return self._sample_mode
@sample_mode.setter
def sample_mode(self, value):
self._sample_mode = value
###Output
_____no_output_____
###Markdown
**5. Model of Fortunato et al. [2017]** The model of Fortunato et al. [2017] differs from the previous one in the following respects:* There is no dropout.* The original layers are replaced by their Bayesian versions, defined above.* This model has a method called `kl_cost` that returns the sum of the complexity costs of all the layers that make up the model.Following what the authors proposed, we do not place a probability distribution over the bias of the dense layer, in accordance with the current value of the `bbb_on_bias` variable.
###Code
class BayesianPTBModel(nn.HybridBlock):
def __init__(self, vocab_size, embedding_size=args.embedding_size,
hidden_size=args.hidden_size, num_layers=args.num_layers,
prior_pi=args.prior_pi, prior_sigma1=args.prior_sigma1,
prior_sigma2=args.prior_sigma2, tie_weights=args.tie_weights,
sample_mode=args.sample_mode, bbb_on_bias=args.bbb_on_bias,
**kwargs):
super(BayesianPTBModel, self).__init__(**kwargs)
self._sample_mode = sample_mode
self._total_kl_cost = None
non_lstm_rho_init = non_lstm_rho_initializer(prior_pi, prior_sigma1,
prior_sigma2)
lstm_rho_init = lstm_rho_initializer(prior_pi, prior_sigma1, prior_sigma2)
with self.name_scope():
self.encoder = BayesianEmbedding(input_dim=vocab_size,
output_dim=embedding_size,
prior_pi=prior_pi,
prior_sigma1=prior_sigma1,
prior_sigma2=prior_sigma2,
weight_rho_initializer=non_lstm_rho_init,
sample_mode=sample_mode)
self.lstm = BayesianLSTM(input_size=embedding_size,
hidden_size=hidden_size,
prior_pi=prior_pi,
prior_sigma1=prior_sigma1,
prior_sigma2=prior_sigma2,
num_layers=num_layers,
i2h_weight_rho_initializer=lstm_rho_init,
h2h_weight_rho_initializer=lstm_rho_init,
i2h_bias_rho_initializer=lstm_rho_init,
h2h_bias_rho_initializer=lstm_rho_init,
sample_mode=sample_mode)
if tie_weights:
self.decoder = BayesianDense(units=vocab_size,
in_units=hidden_size,
prior_pi=prior_pi,
prior_sigma1=prior_sigma1,
prior_sigma2=prior_sigma2,
weight_rho_initializer=non_lstm_rho_init,
bias_rho_initializer=non_lstm_rho_init,
sample_mode=sample_mode,
bbb_on_bias=bbb_on_bias,
params=self.encoder.params)
else:
self.decoder = BayesianDense(units=vocab_size,
in_units=hidden_size,
prior_pi=prior_pi,
prior_sigma1=prior_sigma1,
prior_sigma2=prior_sigma2,
weight_rho_initializer=non_lstm_rho_init,
bias_rho_initializer=non_lstm_rho_init,
sample_mode=sample_mode,
bbb_on_bias=bbb_on_bias)
self.hidden_size = hidden_size
def forward(self, x, *args):
out = super(BayesianPTBModel, self).forward(x, *args)
# out = (outputs, states, total_kl_cost)
self._total_kl_cost = out[2]
return out[0], out[1]
def hybrid_forward(self, F, inputs, state):
# inputs.shape = (batch_size, num_steps)
# encoded.shape = (num_steps, batch_size, embedding_size)
encoded = self.encoder(inputs.T)
# output.shape = (num_steps, batch_size, hidden_size)
# state[_].shape = (num_layers, batch_size, hidden_size)
output, state = self.lstm(encoded, state)
# decoded.shape = (num_steps * batch_size, vocab_size)
decoded = self.decoder(output.reshape((-1, self.hidden_size)))
total_kl_cost = self.encoder.kl_cost() + self.lstm.kl_cost() + \
self.decoder.kl_cost()
return decoded, state, total_kl_cost
def begin_state(self, *args, **kwargs):
return self.lstm.begin_state(*args, **kwargs)
def kl_cost(self, scale=1.0):
assert self._total_kl_cost is not None, \
'You must execute a forward operation before getting the KL cost'
return self._total_kl_cost * scale
@property
def sample_mode(self):
return self._sample_mode
@sample_mode.setter
def sample_mode(self, value):
self._sample_mode = value
self.encoder.sample_mode = value
self.lstm.sample_mode = value
self.decoder.sample_mode = value
###Output
_____no_output_____
###Markdown
The training settings for this model are slightly different from those used for the previous model. The cell below shows what needs to change.
###Code
args.lr_decay = 0.9
args.num_epochs = 70
args.high_lr_epochs = 20
###Output
_____no_output_____
###Markdown
When training this model, the $\text{KL}$ divergence we want to minimize is estimated by Monte Carlo simulation with a single sample drawn from the variational approximation. For validation and testing, the weights used in the *forward* operation are the mean vector of that (converged) distribution. Fortunato et al. [2017] used this same procedure to avoid a higher computational cost relative to the model of Zaremba et al. [2014].Since the network's complexity cost does not depend on the data, we need to put it on the same scale as the log-likelihood cost of each mini-batch. According to Blundell et al. [2015], there are several ways to make this adjustment. The one used by Fortunato et al. [2017] is arguably the simplest and most direct: divide `total_kl_cost` by `batch_size * num_batches`. This way, the complexity cost is spread uniformly over all subsequences of all mini-batches of the training set.
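The helper below is only a small sketch of that scaling (the actual computation happens inside `train_epoch_bbb`, defined in the next cell), and the numbers passed in the example call are assumed, purely illustrative values rather than the real PTB iterator sizes.
###Code
def scaled_objective(data_loss, total_kl_cost, batch_size, num_batches):
    # Spread the complexity cost uniformly over the batch_size * num_batches
    # subsequences of the training set, then add the per-mini-batch data loss.
    return total_kl_cost / (batch_size * num_batches) + data_loss

# Assumed, purely illustrative numbers (not read from the actual iterators).
print(scaled_objective(data_loss=140.0, total_kl_cost=2.5e5,
                       batch_size=20, num_batches=1300))
###Output
_____no_output_____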
###Code
def train_epoch_bbb(model, train_iter, loss, clip_norm, trainer, ctx):
start_time = time.time()
num_steps = train_iter.num_steps
batch_size = train_iter.batch_size
num_batches = train_iter.num_batches
num_dataset_elements = batch_size * num_batches
state = model.begin_state(batch_size=batch_size, ctx=ctx)
loss_sum = 0
steps_sum = 0
for X, Y in train_iter:
for s in state: s.detach()
X = X.as_in_context(ctx)
# Y.shape = (batch_size, num_steps)
y = Y.T.reshape((-1,))
# y.shape = (num_steps * batch_size)
y = y.as_in_context(ctx)
with autograd.record():
yhat, state = model(X, state)
L = loss(yhat, y)
# Sum over the sequence
sequence_neg_log_prob = L.reshape((batch_size, num_steps)).sum(axis=1)
# Average over the batch
data_loss = sequence_neg_log_prob.mean()
total_kl_cost = model.kl_cost()
scaled_kl_cost = total_kl_cost / num_dataset_elements
# KL divergence
total_loss = scaled_kl_cost + data_loss
total_loss.backward()
grad_clipping(model, clip_norm)
trainer.step(batch_size=1)
loss_sum += data_loss
steps_sum += num_steps
return loss_sum / steps_sum, time.time() - start_time
def train_bbb(model, train_iter, valid_iter, test_iter,
init_scale=args.init_scale, lr=args.lr_start,
lr_decay=args.lr_decay, num_epochs=args.num_epochs,
high_lr_epochs=args.high_lr_epochs, clip_norm=args.clip_norm,
ctx=context.cpu()):
loss = gluon.loss.SoftmaxCrossEntropyLoss()
model.initialize(ctx=ctx, force_reinit=True,
init=init.Normal(sigma=init_scale))
trainer = gluon.Trainer(model.collect_params(), 'sgd',
{'learning_rate': lr})
# Train and check the progress
for epoch in range(num_epochs):
if epoch >= high_lr_epochs:
lr = lr * lr_decay
trainer._init_optimizer('sgd', {'learning_rate': lr})
train_loss, speed = train_epoch_bbb(model, train_iter, loss, clip_norm,
trainer, ctx)
print('[Epoch %d] time cost %.2fs, train loss %.2f, train ppl %.2f'%(
epoch, speed, train_loss, math.exp(train_loss)))
valid_loss = eval(model, valid_iter, loss, ctx)
print('valid loss %.2f, valid ppl %.2f'%(valid_loss, math.exp(valid_loss)))
test_loss = eval(model, test_iter, loss, ctx)
print('test loss %.2f, test ppl %.2f'%(test_loss, math.exp(test_loss)))
###Output
_____no_output_____
###Markdown
To wrap up, we instantiate this model and run its training. The word-level perplexity reported in the paper by Fortunato et al. [2017] was 78.8 on the validation subset and 75.5 on the test subset. Since the model weights are initialized at random, the results may vary each time the training is run. The best result we obtained in this notebook was 79.37 and 75.98 on those same subsets, in that order. Running this procedure more times would probably yield better results, but we did not make further attempts for reasons of time (and cost).
###Code
model = BayesianPTBModel(len(vocab))
train_bbb(model, train_iter, valid_iter, test_iter, ctx=ctx)
###Output
[Epoch 0] time cost 62.14s, train loss 6.29, train ppl 538.67
valid loss 5.65, valid ppl 284.94
test loss 5.62, test ppl 277.05
[Epoch 1] time cost 61.87s, train loss 5.43, train ppl 227.63
valid loss 5.31, valid ppl 201.71
test loss 5.28, test ppl 196.38
[Epoch 2] time cost 62.25s, train loss 5.12, train ppl 168.16
valid loss 5.07, valid ppl 159.37
test loss 5.04, test ppl 155.23
[Epoch 3] time cost 62.61s, train loss 4.95, train ppl 140.64
valid loss 4.96, valid ppl 142.04
test loss 4.93, test ppl 138.00
[Epoch 4] time cost 62.45s, train loss 4.83, train ppl 124.66
valid loss 4.87, valid ppl 130.52
test loss 4.84, test ppl 127.08
[Epoch 5] time cost 62.24s, train loss 4.73, train ppl 113.61
valid loss 4.80, valid ppl 121.94
test loss 4.77, test ppl 118.45
[Epoch 6] time cost 62.75s, train loss 4.66, train ppl 105.86
valid loss 4.76, valid ppl 117.33
test loss 4.74, test ppl 114.60
[Epoch 7] time cost 62.68s, train loss 4.61, train ppl 100.15
valid loss 4.73, valid ppl 113.07
test loss 4.70, test ppl 110.47
[Epoch 8] time cost 62.57s, train loss 4.56, train ppl 95.60
valid loss 4.70, valid ppl 110.23
test loss 4.68, test ppl 107.52
[Epoch 9] time cost 63.06s, train loss 4.52, train ppl 91.92
valid loss 4.67, valid ppl 107.10
test loss 4.65, test ppl 104.20
[Epoch 10] time cost 62.31s, train loss 4.49, train ppl 89.17
valid loss 4.64, valid ppl 104.02
test loss 4.62, test ppl 101.35
[Epoch 11] time cost 62.07s, train loss 4.46, train ppl 86.71
valid loss 4.63, valid ppl 102.62
test loss 4.61, test ppl 100.09
[Epoch 12] time cost 62.51s, train loss 4.44, train ppl 84.82
valid loss 4.63, valid ppl 102.48
test loss 4.59, test ppl 98.95
[Epoch 13] time cost 62.47s, train loss 4.42, train ppl 83.20
valid loss 4.61, valid ppl 100.49
test loss 4.58, test ppl 97.74
[Epoch 14] time cost 62.64s, train loss 4.41, train ppl 82.04
valid loss 4.60, valid ppl 99.64
test loss 4.57, test ppl 96.56
[Epoch 15] time cost 62.30s, train loss 4.39, train ppl 80.90
valid loss 4.59, valid ppl 98.26
test loss 4.56, test ppl 95.49
[Epoch 16] time cost 62.94s, train loss 4.38, train ppl 80.17
valid loss 4.58, valid ppl 97.42
test loss 4.55, test ppl 94.55
[Epoch 17] time cost 62.19s, train loss 4.37, train ppl 79.41
valid loss 4.58, valid ppl 97.04
test loss 4.54, test ppl 93.74
[Epoch 18] time cost 61.95s, train loss 4.37, train ppl 78.84
valid loss 4.56, valid ppl 95.37
test loss 4.53, test ppl 92.36
[Epoch 19] time cost 62.77s, train loss 4.36, train ppl 78.54
valid loss 4.56, valid ppl 95.83
test loss 4.53, test ppl 92.56
[Epoch 20] time cost 62.28s, train loss 4.34, train ppl 76.60
valid loss 4.55, valid ppl 94.24
test loss 4.51, test ppl 91.15
[Epoch 21] time cost 63.12s, train loss 4.32, train ppl 74.97
valid loss 4.52, valid ppl 91.62
test loss 4.49, test ppl 89.04
[Epoch 22] time cost 62.39s, train loss 4.29, train ppl 73.25
valid loss 4.51, valid ppl 90.56
test loss 4.47, test ppl 87.78
[Epoch 23] time cost 62.08s, train loss 4.27, train ppl 71.87
valid loss 4.49, valid ppl 89.18
test loss 4.46, test ppl 86.78
[Epoch 24] time cost 62.31s, train loss 4.26, train ppl 70.55
valid loss 4.48, valid ppl 88.40
test loss 4.45, test ppl 85.38
[Epoch 25] time cost 62.54s, train loss 4.24, train ppl 69.63
valid loss 4.47, valid ppl 87.03
test loss 4.44, test ppl 84.49
[Epoch 26] time cost 62.76s, train loss 4.23, train ppl 68.57
valid loss 4.46, valid ppl 86.32
test loss 4.43, test ppl 83.52
[Epoch 27] time cost 61.95s, train loss 4.22, train ppl 67.70
valid loss 4.45, valid ppl 85.46
test loss 4.41, test ppl 82.67
[Epoch 28] time cost 62.06s, train loss 4.20, train ppl 66.86
valid loss 4.44, valid ppl 85.04
test loss 4.41, test ppl 81.98
[Epoch 29] time cost 62.37s, train loss 4.19, train ppl 66.10
valid loss 4.44, valid ppl 84.64
test loss 4.40, test ppl 81.83
[Epoch 30] time cost 62.59s, train loss 4.18, train ppl 65.56
valid loss 4.43, valid ppl 83.82
test loss 4.40, test ppl 81.16
[Epoch 31] time cost 62.58s, train loss 4.17, train ppl 64.84
valid loss 4.42, valid ppl 83.44
test loss 4.39, test ppl 80.66
[Epoch 32] time cost 62.26s, train loss 4.17, train ppl 64.43
valid loss 4.42, valid ppl 83.03
test loss 4.38, test ppl 80.08
[Epoch 33] time cost 62.72s, train loss 4.16, train ppl 63.93
valid loss 4.41, valid ppl 82.64
test loss 4.38, test ppl 79.81
[Epoch 34] time cost 62.31s, train loss 4.15, train ppl 63.55
valid loss 4.41, valid ppl 82.36
test loss 4.37, test ppl 79.35
[Epoch 35] time cost 62.73s, train loss 4.15, train ppl 63.13
valid loss 4.41, valid ppl 82.14
test loss 4.37, test ppl 79.22
[Epoch 36] time cost 62.23s, train loss 4.14, train ppl 62.75
valid loss 4.40, valid ppl 81.62
test loss 4.37, test ppl 78.79
[Epoch 37] time cost 62.27s, train loss 4.13, train ppl 62.39
valid loss 4.40, valid ppl 81.51
test loss 4.37, test ppl 78.66
[Epoch 38] time cost 62.44s, train loss 4.13, train ppl 62.20
valid loss 4.40, valid ppl 81.20
test loss 4.36, test ppl 78.21
[Epoch 39] time cost 62.66s, train loss 4.13, train ppl 61.99
valid loss 4.40, valid ppl 81.14
test loss 4.36, test ppl 78.17
[Epoch 40] time cost 63.19s, train loss 4.12, train ppl 61.73
valid loss 4.39, valid ppl 81.01
test loss 4.36, test ppl 77.96
[Epoch 41] time cost 62.34s, train loss 4.12, train ppl 61.50
valid loss 4.39, valid ppl 80.90
test loss 4.36, test ppl 77.91
[Epoch 42] time cost 62.57s, train loss 4.12, train ppl 61.35
valid loss 4.39, valid ppl 80.80
test loss 4.35, test ppl 77.77
[Epoch 43] time cost 62.48s, train loss 4.12, train ppl 61.29
valid loss 4.39, valid ppl 80.65
test loss 4.35, test ppl 77.58
[Epoch 44] time cost 62.50s, train loss 4.11, train ppl 61.08
valid loss 4.39, valid ppl 80.58
test loss 4.35, test ppl 77.54
[Epoch 45] time cost 62.82s, train loss 4.11, train ppl 60.84
valid loss 4.39, valid ppl 80.46
test loss 4.35, test ppl 77.39
[Epoch 46] time cost 62.38s, train loss 4.11, train ppl 60.71
valid loss 4.39, valid ppl 80.35
test loss 4.35, test ppl 77.31
[Epoch 47] time cost 62.25s, train loss 4.10, train ppl 60.64
valid loss 4.39, valid ppl 80.36
test loss 4.35, test ppl 77.30
[Epoch 48] time cost 62.07s, train loss 4.10, train ppl 60.64
valid loss 4.38, valid ppl 80.17
test loss 4.35, test ppl 77.09
[Epoch 49] time cost 62.57s, train loss 4.10, train ppl 60.51
valid loss 4.38, valid ppl 80.10
test loss 4.34, test ppl 77.06
[Epoch 50] time cost 62.67s, train loss 4.10, train ppl 60.38
valid loss 4.38, valid ppl 80.11
test loss 4.34, test ppl 76.99
[Epoch 51] time cost 62.46s, train loss 4.10, train ppl 60.34
valid loss 4.38, valid ppl 80.03
test loss 4.34, test ppl 76.94
[Epoch 52] time cost 62.23s, train loss 4.10, train ppl 60.19
valid loss 4.38, valid ppl 80.01
test loss 4.34, test ppl 76.90
[Epoch 53] time cost 62.24s, train loss 4.10, train ppl 60.12
valid loss 4.38, valid ppl 80.01
test loss 4.34, test ppl 76.85
[Epoch 54] time cost 62.36s, train loss 4.10, train ppl 60.18
valid loss 4.38, valid ppl 79.91
test loss 4.34, test ppl 76.76
[Epoch 55] time cost 62.26s, train loss 4.10, train ppl 60.06
valid loss 4.38, valid ppl 79.85
test loss 4.34, test ppl 76.68
[Epoch 56] time cost 62.21s, train loss 4.09, train ppl 60.03
valid loss 4.38, valid ppl 79.85
test loss 4.34, test ppl 76.69
[Epoch 57] time cost 62.35s, train loss 4.09, train ppl 59.96
valid loss 4.38, valid ppl 79.85
test loss 4.34, test ppl 76.70
[Epoch 58] time cost 62.49s, train loss 4.09, train ppl 59.98
valid loss 4.38, valid ppl 79.83
test loss 4.34, test ppl 76.65
[Epoch 59] time cost 62.92s, train loss 4.09, train ppl 59.82
valid loss 4.38, valid ppl 79.80
test loss 4.34, test ppl 76.62
[Epoch 60] time cost 62.32s, train loss 4.09, train ppl 59.87
valid loss 4.38, valid ppl 79.76
test loss 4.34, test ppl 76.60
[Epoch 61] time cost 62.24s, train loss 4.09, train ppl 59.88
valid loss 4.38, valid ppl 79.76
test loss 4.34, test ppl 76.60
[Epoch 62] time cost 61.99s, train loss 4.09, train ppl 59.82
valid loss 4.38, valid ppl 79.73
test loss 4.34, test ppl 76.56
[Epoch 63] time cost 62.71s, train loss 4.09, train ppl 59.75
valid loss 4.38, valid ppl 79.67
test loss 4.34, test ppl 76.50
[Epoch 64] time cost 62.45s, train loss 4.09, train ppl 59.80
valid loss 4.38, valid ppl 79.65
test loss 4.34, test ppl 76.47
[Epoch 65] time cost 62.77s, train loss 4.09, train ppl 59.81
valid loss 4.38, valid ppl 79.67
test loss 4.34, test ppl 76.51
[Epoch 66] time cost 62.60s, train loss 4.09, train ppl 59.74
valid loss 4.38, valid ppl 79.67
test loss 4.34, test ppl 76.48
[Epoch 67] time cost 62.15s, train loss 4.09, train ppl 59.65
valid loss 4.38, valid ppl 79.67
test loss 4.34, test ppl 76.47
[Epoch 68] time cost 62.38s, train loss 4.09, train ppl 59.75
valid loss 4.38, valid ppl 79.67
test loss 4.34, test ppl 76.48
[Epoch 69] time cost 62.61s, train loss 4.09, train ppl 59.64
valid loss 4.38, valid ppl 79.66
test loss 4.34, test ppl 76.46
|
ImageProcessing/5-face_classification.ipynb
|
###Markdown
dataset
###Code
dataset = fetch_olivetti_faces(shuffle=False)
X = dataset.data
print(X.shape[0]) # number of samples
show_dataset(X, N=64)
y = dataset.target # labels
print(y)
###Output
_____no_output_____
###Markdown
Splitting the data into training and test sets
###Code
ss = StratifiedShuffleSplit(n_splits=1, # generate a single split
                            train_size=0.5, # half of the data for training
                            test_size=0.5) # and half for testing
train_index, test_index = next(ss.split(X, y))
X_train, X_test = X[train_index], X[test_index] # training data, test data
y_train, y_test = y[train_index], y[test_index] # training labels, test labels
###Output
_____no_output_____
###Markdown
kNN
###Code
k_vals = [1, 2, 3, 10]
clfs = {}
for k in k_vals:
clf = kNN(n_neighbors=k)
clf.fit(X_train, y_train)
clfs[k] = clf
print(k, 'training accuracy', clf.score(X_train, y_train))
print(k, 'test accuracy', clf.score(X_test, y_test))
@interact(sample=(0, len(y)-1, 1),
k=(1, 3, 1)
)
def g(sample=0, k=1):
imshow(X[sample].reshape(64,64), vmin=0, vmax=1)
clf = clfs[k]
y_pred = clf.predict(X[sample, np.newaxis])[0]
istrain = 'train' if sample in train_index else 'test'
plt.axis('off')
plt.title('{2}: true {0} predict {1}'.format(y[sample], y_pred, istrain))
###Output
_____no_output_____
###Markdown
SVM
###Code
kernels = ['linear', 'poly', 'rbf']
clfs = {}
for kernel in kernels:
clf = SVC(kernel=kernel)
clf.fit(X_train, y_train)
clfs[kernel] = clf
print(kernel, 'training accuracy', clf.score(X_train, y_train))
print(kernel, 'test accuracy', clf.score(X_test, y_test))
@interact(sample=(0, len(y)-1, 1),
kernel=RadioButtons(options=kernels)
)
def g(sample=0, kernel='linear'):
imshow(X[sample].reshape(64,64), vmin=0, vmax=1)
clf = clfs[kernel]
y_pred = clf.predict(X[sample, np.newaxis])[0]
istrain = 'train' if sample in train_index else 'test'
plt.axis('off')
plt.title('{2}: true {0} predict {1}'.format(y[sample], y_pred, istrain))
###Output
_____no_output_____
###Markdown
Random Forest
###Code
n_vals = [10, 100, 500, 1000]
clfs = {}
for n in n_vals:
clf = RandomForest(n_estimators=n)
clf.fit(X_train, y_train)
clfs[n] = clf
print(n, 'training accuracy', clf.score(X_train, y_train))
print(n, 'test accuracy', clf.score(X_test, y_test))
@interact(sample=(0, len(y)-1, 1),
n=RadioButtons(options=n_vals)
)
def g(sample=0, n=100):
imshow(X[sample].reshape(64,64), vmin=0, vmax=1)
clf = clfs[n]
y_pred = clf.predict(X[sample, np.newaxis])[0]
istrain = 'train' if sample in train_index else 'test'
plt.axis('off')
plt.title('{2}: true {0} predict {1}'.format(y[sample], y_pred, istrain))
###Output
_____no_output_____
###Markdown
AdaBoost
###Code
n_vals = [10, 100, 200]
clfs = {}
for n in n_vals:
# clf = AdaBoost(n_estimators=n)
clf = AdaBoost(n_estimators=n, learning_rate=0.01)
clf.fit(X_train, y_train)
clfs[n] = clf
print(n, 'training accuracy', clf.score(X_train, y_train))
print(n, 'test accuracy', clf.score(X_test, y_test))
@interact(sample=(0, len(y)-1, 1),
n=RadioButtons(options=n_vals)
)
def g(sample=0, n=100):
imshow(X[sample].reshape(64,64), vmin=0, vmax=1)
clf = clfs[n]
y_pred = clf.predict(X[sample, np.newaxis])[0]
istrain = 'train' if sample in train_index else 'test'
plt.axis('off')
plt.title('{2}: true {0} predict {1}'.format(y[sample], y_pred, istrain))
###Output
_____no_output_____
|
AssetManagement/export_table.ipynb
|
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
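###Markdown
As noted above, additional basemaps can be stacked on the map. The optional cell below is a small example of the `Map.add_basemap()` call that also appears later in this notebook; skip it if the default basemap is enough.
###Code
# Optional: add the Google Maps road layer as an extra basemap.
Map.add_basemap('ROADMAP')
###Output
_____no_output_____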
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
Map.setCenter(lng, lat, 10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
Map.addLayer(fc)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
Map.setCenter(lng, lat, 10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
Map.addLayer(fc)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
Map.setCenter(lng, lat, 10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
Map.addLayer(fc)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for this first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
Map.setCenter(lng, lat, 10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
Map.addLayer(fc)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
Map.setCenter(lng, lat, 10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
Map.addLayer(fc)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine Introduction This is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:
```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```
then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create Map Next it's time to create a map. Here we create the Earth Engine objects that will be added to the map as layers
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
view_state = pdk.ViewState(longitude=lng, latitude=lat, zoom=10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
ee_layers.append(EarthEngineLayer(ee_object=fc))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
Map.setCenter(lng, lat, 10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
Map.addLayer(fc)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
polys = fromFT.geometry()
centroid = polys.centroid()
lng, lat = centroid.getInfo()['coordinates']
# print("lng = {}, lat = {}".format(lng, lat))
# Map.setCenter(lng, lat, 10)
# Map.addLayer(fromFT)
count = fromFT.size().getInfo()
Map.setCenter(lng, lat, 10)
for i in range(2, 2 + count):
fc = fromFT.filter(ee.Filter.eq('system:index', str(i)))
Map.addLayer(fc)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
|
jupyter36/07_Input_Output.ipynb
|
###Markdown
Python for Finance: **Analyze Big Financial Data**, O'Reilly (2014), Yves Hilpisch. **Buy the book** | O'Reilly | Amazon. **All book codes & IPYNBs** | http://oreilly.quant-platform.com. **The Python Quants GmbH** | http://tpq.io. **Contact us** | [email protected]. Input-Output Operations
###Code
from pylab import plt
plt.style.use('seaborn')
import matplotlib as mpl
mpl.rcParams['font.family'] = 'serif'
###Output
_____no_output_____
###Markdown
Basic I/O with Python Writing Objects to Disk
###Code
path = '/Users/yves/Documents/Temp/data/' # choose a path to your liking
import numpy as np
from random import gauss
a = [gauss(1.5, 2) for i in range(1000000)]
# generation of normally distributed randoms
import pickle
pkl_file = open(path + 'data.pkl', 'wb')
# open file for writing
# Note: existing file might be overwritten
%time pickle.dump(a, pkl_file)
pkl_file
pkl_file.close()
ll $path*
pkl_file = open(path + 'data.pkl', 'rb') # open file for reading
%time b = pickle.load(pkl_file)
b[:5]
a[:5]
np.allclose(np.array(a), np.array(b))
np.sum(np.array(a) - np.array(b))
pkl_file = open(path + 'data.pkl', 'wb') # open file for writing
%time pickle.dump(np.array(a), pkl_file)
%time pickle.dump(np.array(a) ** 2, pkl_file)
pkl_file.close()
ll $path*
pkl_file = open(path + 'data.pkl', 'rb') # open file for reading
x = pickle.load(pkl_file)
x
y = pickle.load(pkl_file)
y
pkl_file.close()
pkl_file = open(path + 'data.pkl', 'wb') # open file for writing
pickle.dump({'x' : x, 'y' : y}, pkl_file)
pkl_file.close()
pkl_file = open(path + 'data.pkl', 'rb') # open file for writing
data = pickle.load(pkl_file)
pkl_file.close()
for key in data.keys():
print(key, data[key][:4])
!rm -f $path*
###Output
_____no_output_____
###Markdown
Reading and Writing Text Files
###Code
rows = 5000
a = np.random.standard_normal((rows, 5)) # dummy data
a.round(4)
import pandas as pd
t = pd.date_range(start='2014/1/1', periods=rows, freq='H')
# set of hourly datetime objects
t
csv_file = open(path + 'data.csv', 'w') # open file for writing
header = 'date,no1,no2,no3,no4,no5\n'
csv_file.write(header)
for t_, (no1, no2, no3, no4, no5) in zip(t, a):
s = '%s,%f,%f,%f,%f,%f\n' % (t_, no1, no2, no3, no4, no5)
csv_file.write(s)
csv_file.close()
ll $path*
csv_file = open(path + 'data.csv', 'r') # open file for reading
for i in range(5):
print(csv_file.readline(), end='')
csv_file = open(path + 'data.csv', 'r')
content = csv_file.readlines()
for line in content[:5]:
print(line, end='')
csv_file.close()
!rm -f $path*
###Output
_____no_output_____
###Markdown
SQL Databases
###Code
import sqlite3 as sq3
query = 'CREATE TABLE numbs (Date date, No1 real, No2 real)'
con = sq3.connect(path + 'numbs.db')
con.execute(query)
con.commit()
import datetime as dt
con.execute('INSERT INTO numbs VALUES(?, ?, ?)',
(dt.datetime.now(), 0.12, 7.3))
data = np.random.standard_normal((10000, 2)).round(5)
for row in data:
con.execute('INSERT INTO numbs VALUES(?, ?, ?)',
(dt.datetime.now(), row[0], row[1]))
con.commit()
con.execute('SELECT * FROM numbs').fetchmany(10)
pointer = con.execute('SELECT * FROM numbs')
for i in range(3):
print(pointer.fetchone())
con.close()
!rm -f $path*
###Output
_____no_output_____
###Markdown
Writing and Reading Numpy Arrays
###Code
import numpy as np
dtimes = np.arange('2015-01-01 10:00:00', '2021-12-31 22:00:00',
dtype='datetime64[m]') # minute intervals
len(dtimes)
dty = np.dtype([('Date', 'datetime64[m]'), ('No1', 'f'), ('No2', 'f')])
data = np.zeros(len(dtimes), dtype=dty)
data['Date'] = dtimes
a = np.random.standard_normal((len(dtimes), 2)).round(5)
data['No1'] = a[:, 0]
data['No2'] = a[:, 1]
%time np.save(path + 'array', data) # suffix .npy is added
ll $path*
%time np.load(path + 'array.npy')
data = np.random.standard_normal((10000, 6000))
%time np.save(path + 'array', data)
ll $path*
%time np.load(path + 'array.npy')
data = 0.0
!rm -f $path*
###Output
_____no_output_____
###Markdown
I/O with pandas
###Code
import numpy as np
import pandas as pd
data = np.random.standard_normal((1000000, 5)).round(5)
# sample data set
filename = path + 'numbs'
###Output
_____no_output_____
###Markdown
SQL Database
###Code
import sqlite3 as sq3
query = 'CREATE TABLE numbers (No1 real, No2 real,\
No3 real, No4 real, No5 real)'
con = sq3.Connection(filename + '.db')
con.execute(query)
%%time
con.executemany('INSERT INTO numbers VALUES (?, ?, ?, ?, ?)', data)
con.commit()
ll $path*
%%time
temp = con.execute('SELECT * FROM numbers').fetchall()
print(temp[:2])
temp = 0.0
%%time
query = 'SELECT * FROM numbers WHERE No1 > 0 AND No2 < 0'
res = np.array(con.execute(query).fetchall()).round(3)
res = res[::100] # every 100th result
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(res[:, 0], res[:, 1], 'ro')
plt.grid(True); plt.xlim(-0.5, 4.5); plt.ylim(-4.5, 0.5)
# tag: scatter_query
# title: Plot of the query result
# size: 60
###Output
_____no_output_____
###Markdown
From SQL to pandas
###Code
%time data = pd.read_sql('SELECT * FROM numbers', con)
data.head()
%time data[(data['No1'] > 0) & (data['No2'] < 0)].head()
%%time
res = data[['No1', 'No2']][((data['No1'] > 0.5) | (data['No1'] < -0.5))
& ((data['No2'] < -1) | (data['No2'] > 1))]
plt.plot(res.No1, res.No2, 'ro')
plt.grid(True); plt.axis('tight')
# tag: data_scatter_1
# title: Scatter plot of complex query results
# size: 55
h5s = pd.HDFStore(filename + '.h5s', 'w')
%time h5s['data'] = data
h5s
h5s.close()
%%time
h5s = pd.HDFStore(filename + '.h5s', 'r')
temp = h5s['data']
h5s.close()
np.allclose(np.array(temp), np.array(data))
temp = 0.0
ll $path*
###Output
-rw-r--r-- 1 yves staff 52633600 Nov 18 11:19 /Users/yves/Documents/Temp/data/numbs.db
-rw-r--r-- 1 yves staff 48007192 Nov 18 11:19 /Users/yves/Documents/Temp/data/numbs.h5s
###Markdown
Data as CSV File
###Code
%time data.to_csv(filename + '.csv')
ll $path
%%time
pd.read_csv(filename + '.csv')[['No1', 'No2',
'No3', 'No4']].hist(bins=20);
# tag: data_hist_3
# title: Histogram of 4 data set
###Output
CPU times: user 2.03 s, sys: 222 ms, total: 2.26 s
Wall time: 2.64 s
###Markdown
Data as Excel File
###Code
%time data[:100000].to_excel(filename + '.xlsx')
%time pd.read_excel(filename + '.xlsx', 'Sheet1').cumsum().plot()
# tag: data_paths
# title: Paths of random data from Excel file
# size: 60
ll $path*
rm -f $path*
###Output
_____no_output_____
###Markdown
Fast I/O with PyTables
###Code
import numpy as np
import tables as tb
import datetime as dt
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Working with Tables
###Code
filename = path + 'tab.h5'
h5 = tb.open_file(filename, 'w')
rows = 2000000
row_des = {
'Date': tb.StringCol(26, pos=1),
'No1': tb.IntCol(pos=2),
'No2': tb.IntCol(pos=3),
'No3': tb.Float64Col(pos=4),
'No4': tb.Float64Col(pos=5)
}
filters = tb.Filters(complevel=0) # no compression
tab = h5.create_table('/', 'ints_floats', row_des,
title='Integers and Floats',
expectedrows=rows, filters=filters)
tab
pointer = tab.row
ran_int = np.random.randint(0, 10000, size=(rows, 2))
ran_flo = np.random.standard_normal((rows, 2)).round(5)
%%time
for i in range(rows):
pointer['Date'] = dt.datetime.now()
pointer['No1'] = ran_int[i, 0]
pointer['No2'] = ran_int[i, 1]
pointer['No3'] = ran_flo[i, 0]
pointer['No4'] = ran_flo[i, 1]
pointer.append()
# this appends the data and
# moves the pointer one row forward
tab.flush()
tab
ll $path*
dty = np.dtype([('Date', 'S26'), ('No1', '<i4'), ('No2', '<i4'),
('No3', '<f8'), ('No4', '<f8')])
sarray = np.zeros(len(ran_int), dtype=dty)
sarray
%%time
sarray['Date'] = dt.datetime.now()
sarray['No1'] = ran_int[:, 0]
sarray['No2'] = ran_int[:, 1]
sarray['No3'] = ran_flo[:, 0]
sarray['No4'] = ran_flo[:, 1]
%%time
h5.create_table('/', 'ints_floats_from_array', sarray,
title='Integers and Floats',
expectedrows=rows, filters=filters)
h5
h5.remove_node('/', 'ints_floats_from_array')
tab[:3]
tab[:4]['No4']
%time np.sum(tab[:]['No3'])
%time np.sum(np.sqrt(tab[:]['No1']))
%%time
plt.hist(tab[:]['No3'], bins=30)
plt.grid(True)
print(len(tab[:]['No3']))
# tag: data_hist
# title: Histogram of data
# size: 60
%%time
res = np.array([(row['No3'], row['No4']) for row in
tab.where('((No3 < -0.5) | (No3 > 0.5)) \
& ((No4 < -1) | (No4 > 1))')])[::100]
plt.plot(res.T[0], res.T[1], 'ro')
plt.grid(True)
# tag: scatter_data
# title: Scatter plot of query result
# size: 70
%%time
values = tab.cols.No3[:]
print("Max %18.3f" % values.max())
print("Ave %18.3f" % values.mean())
print("Min %18.3f" % values.min())
print("Std %18.3f" % values.std())
%%time
results = [(row['No1'], row['No2']) for row in
tab.where('((No1 > 9800) | (No1 < 200)) \
& ((No2 > 4500) & (No2 < 5500))')]
for res in results[:4]:
print(res)
%%time
results = [(row['No1'], row['No2']) for row in
tab.where('(No1 == 1234) & (No2 > 9776)')]
for res in results:
print(res)
###Output
(1234, 9855)
(1234, 9854)
(1234, 9960)
(1234, 9910)
(1234, 9980)
CPU times: user 56.8 ms, sys: 44.2 ms, total: 101 ms
Wall time: 86.7 ms
###Markdown
Working with Compressed Tables
###Code
filename = path + 'tab.h5c'
h5c = tb.open_file(filename, 'w')
filters = tb.Filters(complevel=4, complib='blosc')
tabc = h5c.create_table('/', 'ints_floats', sarray,
title='Integers and Floats',
expectedrows=rows, filters=filters)
%%time
res = np.array([(row['No3'], row['No4']) for row in
tabc.where('((No3 < -0.5) | (No3 > 0.5)) \
& ((No4 < -1) | (No4 > 1))')])[::100]
%time arr_non = tab.read()
%time arr_com = tabc.read()
ll $path*
h5c.close()
###Output
_____no_output_____
###Markdown
Working with Arrays
###Code
%%time
arr_int = h5.create_array('/', 'integers', ran_int)
arr_flo = h5.create_array('/', 'floats', ran_flo)
h5
ll $path*
h5.close()
!rm -f $path*
###Output
_____no_output_____
###Markdown
Out-of-Memory Computations
###Code
filename = path + 'array.h5'
h5 = tb.open_file(filename, 'w')
n = 100
ear = h5.create_earray(h5.root, 'ear',
atom=tb.Float64Atom(),
shape=(0, n))
%%time
rand = np.random.standard_normal((n, n))
for i in range(750):
ear.append(rand)
ear.flush()
ear
ear.size_on_disk
out = h5.create_earray(h5.root, 'out',
atom=tb.Float64Atom(),
shape=(0, n))
expr = tb.Expr('3 * sin(ear) + sqrt(abs(ear))')
# the numerical expression as a string object
expr.set_output(out, append_mode=True)
# target to store results is disk-based array
%time expr.eval()
# evaluation of the numerical expression
# and storage of results in disk-based array
out[0, :10]
%time imarray = ear.read()
# read whole array into memory
import numexpr as ne
expr = '3 * sin(imarray) + sqrt(abs(imarray))'
ne.set_num_threads(16)
%time ne.evaluate(expr)[0, :10]
h5.close()
!rm -f $path*
###Output
_____no_output_____
|
Complete/activationsComplete.ipynb
|
###Markdown
Activation functions In this notebook we will take a look at some of the different activation functions available in the Keras backend and compare them. The data We will use our old friend MNIST for its simplicity. Load the dataset and preprocess it.
###Code
import os, time
import tensorflow as tf
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.keras.backend.clear_session()
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.keras.backend.clear_session()
from tensorflow import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.layers import LeakyReLU
###Output
_____no_output_____
###Markdown
Model Architecture Let's build a very simple model in this example. It will consist of:
- A dense layer with 512 units, relu activated
- A dense layer with the number of classes as the number of units, softmax activated
- RMSprop as the optimizer and categorical crossentropy as the loss function, with accuracy added to the metrics
Build the model
###Code
num_classes = 10
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 512) 401920
_________________________________________________________________
dense_1 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train the model for 5 epochs with a batch size of 128. Use the test data as validation and evaluate the model. Keep the training information in a `history` variable.
###Code
batch_size = 128
epochs = 5
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=100)
print ('Test loss:', round(score[0], 3))
print ('Test accuracy:', round(score[1], 3))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 3s 50us/sample - loss: 0.2555 - accuracy: 0.9268 - val_loss: 0.1262 - val_accuracy: 0.9612
Epoch 2/5
60000/60000 [==============================] - 3s 43us/sample - loss: 0.1050 - accuracy: 0.9680 - val_loss: 0.0929 - val_accuracy: 0.9710
Epoch 3/5
60000/60000 [==============================] - 3s 50us/sample - loss: 0.0684 - accuracy: 0.9793 - val_loss: 0.0747 - val_accuracy: 0.9764
Epoch 4/5
60000/60000 [==============================] - 3s 43us/sample - loss: 0.0501 - accuracy: 0.9848 - val_loss: 0.0661 - val_accuracy: 0.9803
Epoch 5/5
60000/60000 [==============================] - 3s 46us/sample - loss: 0.0373 - accuracy: 0.9893 - val_loss: 0.0667 - val_accuracy: 0.9790
Test loss: 0.067
Test accuracy: 0.979
###Markdown
Let's now plot the training and validation loss using matplotlib. Does it look nice?
###Code
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
###Output
_____no_output_____
###Markdown
Build networks using all activations Now let's run the model with all the activations in the list and view the results in TensorBoard. Let's do precisely that! Hint: remember to add the TensorBoard callback for the training. Hint 2: use the function os.path.join to include the activation name in the log directory of each run
###Code
from tensorflow.keras.callbacks import TensorBoard
epochs = 20
log_path = '/home/fer/data/formaciones/afi/tensorboard_log/activations_experiment2'
for activation in [None, 'sigmoid', 'tanh', 'relu']:
# build and compile the model
model = Sequential()
model.add(Dense(512, activation=activation, input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
tensorboard = TensorBoard(os.path.join(log_path,f'{activation}_{time.time()}'))
# fit the model, adding the tensorboard to the callbacks
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test),
callbacks=[tensorboard])
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.3829 - accuracy: 0.8866 - val_loss: 0.3203 - val_accuracy: 0.9056
Epoch 2/20
60000/60000 [==============================] - 2s 41us/sample - loss: 0.3094 - accuracy: 0.9136 - val_loss: 0.3023 - val_accuracy: 0.9161
Epoch 3/20
60000/60000 [==============================] - 2s 42us/sample - loss: 0.2959 - accuracy: 0.9173 - val_loss: 0.3059 - val_accuracy: 0.9137
Epoch 4/20
60000/60000 [==============================] - 2s 42us/sample - loss: 0.2874 - accuracy: 0.9201 - val_loss: 0.2831 - val_accuracy: 0.9246
Epoch 5/20
60000/60000 [==============================] - 3s 47us/sample - loss: 0.2829 - accuracy: 0.9214 - val_loss: 0.3154 - val_accuracy: 0.9123
Epoch 6/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.2782 - accuracy: 0.9227 - val_loss: 0.2835 - val_accuracy: 0.9207
Epoch 7/20
60000/60000 [==============================] - 3s 45us/sample - loss: 0.2756 - accuracy: 0.9223 - val_loss: 0.2880 - val_accuracy: 0.9204
Epoch 8/20
60000/60000 [==============================] - 3s 42us/sample - loss: 0.2734 - accuracy: 0.9245 - val_loss: 0.2928 - val_accuracy: 0.9202
Epoch 9/20
60000/60000 [==============================] - 3s 44us/sample - loss: 0.2710 - accuracy: 0.9259 - val_loss: 0.2921 - val_accuracy: 0.9232
Epoch 10/20
60000/60000 [==============================] - 3s 44us/sample - loss: 0.2686 - accuracy: 0.9251 - val_loss: 0.2876 - val_accuracy: 0.9227
Epoch 11/20
60000/60000 [==============================] - 3s 45us/sample - loss: 0.2672 - accuracy: 0.9254 - val_loss: 0.2856 - val_accuracy: 0.9242
Epoch 12/20
60000/60000 [==============================] - 3s 46us/sample - loss: 0.2648 - accuracy: 0.9263 - val_loss: 0.2914 - val_accuracy: 0.9236
Epoch 13/20
60000/60000 [==============================] - 3s 47us/sample - loss: 0.2652 - accuracy: 0.9251 - val_loss: 0.2862 - val_accuracy: 0.9240
Epoch 14/20
60000/60000 [==============================] - 3s 45us/sample - loss: 0.2626 - accuracy: 0.9273 - val_loss: 0.2949 - val_accuracy: 0.9214
Epoch 15/20
60000/60000 [==============================] - 3s 44us/sample - loss: 0.2619 - accuracy: 0.9269 - val_loss: 0.2873 - val_accuracy: 0.9216
Epoch 16/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.2605 - accuracy: 0.9274 - val_loss: 0.2795 - val_accuracy: 0.9240
Epoch 17/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.2601 - accuracy: 0.9284 - val_loss: 0.2887 - val_accuracy: 0.9210
Epoch 18/20
60000/60000 [==============================] - 3s 47us/sample - loss: 0.2585 - accuracy: 0.9281 - val_loss: 0.2920 - val_accuracy: 0.9213
Epoch 19/20
60000/60000 [==============================] - 3s 46us/sample - loss: 0.2594 - accuracy: 0.9285 - val_loss: 0.2871 - val_accuracy: 0.9232
Epoch 20/20
60000/60000 [==============================] - 3s 46us/sample - loss: 0.2566 - accuracy: 0.9289 - val_loss: 0.2844 - val_accuracy: 0.9254
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 3s 54us/sample - loss: 0.4218 - accuracy: 0.8852 - val_loss: 0.2587 - val_accuracy: 0.9237
Epoch 2/20
60000/60000 [==============================] - 3s 48us/sample - loss: 0.2352 - accuracy: 0.9313 - val_loss: 0.1974 - val_accuracy: 0.9414
Epoch 3/20
60000/60000 [==============================] - 3s 47us/sample - loss: 0.1777 - accuracy: 0.9483 - val_loss: 0.1528 - val_accuracy: 0.9540
Epoch 4/20
60000/60000 [==============================] - 3s 50us/sample - loss: 0.1386 - accuracy: 0.9597 - val_loss: 0.1280 - val_accuracy: 0.9620
Epoch 5/20
60000/60000 [==============================] - 3s 50us/sample - loss: 0.1122 - accuracy: 0.9670 - val_loss: 0.1128 - val_accuracy: 0.9667
Epoch 6/20
60000/60000 [==============================] - 3s 50us/sample - loss: 0.0930 - accuracy: 0.9733 - val_loss: 0.1011 - val_accuracy: 0.9682
Epoch 7/20
60000/60000 [==============================] - 4s 67us/sample - loss: 0.0781 - accuracy: 0.9774 - val_loss: 0.0912 - val_accuracy: 0.9710
Epoch 8/20
60000/60000 [==============================] - 4s 68us/sample - loss: 0.0668 - accuracy: 0.9805 - val_loss: 0.0865 - val_accuracy: 0.9732
Epoch 9/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.0569 - accuracy: 0.9833 - val_loss: 0.0778 - val_accuracy: 0.9766
Epoch 10/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.0494 - accuracy: 0.9857 - val_loss: 0.0718 - val_accuracy: 0.9783
Epoch 11/20
60000/60000 [==============================] - 4s 65us/sample - loss: 0.0430 - accuracy: 0.9873 - val_loss: 0.0706 - val_accuracy: 0.9775
Epoch 12/20
60000/60000 [==============================] - 4s 60us/sample - loss: 0.0375 - accuracy: 0.9890 - val_loss: 0.0683 - val_accuracy: 0.9799
Epoch 13/20
60000/60000 [==============================] - 5s 76us/sample - loss: 0.0327 - accuracy: 0.9910 - val_loss: 0.0673 - val_accuracy: 0.9796
Epoch 14/20
60000/60000 [==============================] - 3s 55us/sample - loss: 0.0288 - accuracy: 0.9919 - val_loss: 0.0634 - val_accuracy: 0.9813
Epoch 15/20
60000/60000 [==============================] - 4s 58us/sample - loss: 0.0249 - accuracy: 0.9935 - val_loss: 0.0619 - val_accuracy: 0.9808
Epoch 16/20
60000/60000 [==============================] - 3s 51us/sample - loss: 0.0215 - accuracy: 0.9947 - val_loss: 0.0622 - val_accuracy: 0.9816
Epoch 17/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0190 - accuracy: 0.9953 - val_loss: 0.0627 - val_accuracy: 0.9818
Epoch 18/20
60000/60000 [==============================] - 3s 51us/sample - loss: 0.0163 - accuracy: 0.9962 - val_loss: 0.0647 - val_accuracy: 0.9810
Epoch 19/20
60000/60000 [==============================] - 3s 52us/sample - loss: 0.0143 - accuracy: 0.9968 - val_loss: 0.0659 - val_accuracy: 0.9810
Epoch 20/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0123 - accuracy: 0.9973 - val_loss: 0.0653 - val_accuracy: 0.9805
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 3s 58us/sample - loss: 0.3361 - accuracy: 0.9004 - val_loss: 0.2378 - val_accuracy: 0.9290
Epoch 2/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.1752 - accuracy: 0.9488 - val_loss: 0.1313 - val_accuracy: 0.9614
Epoch 3/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.1164 - accuracy: 0.9655 - val_loss: 0.1088 - val_accuracy: 0.9682
Epoch 4/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0853 - accuracy: 0.9745 - val_loss: 0.0857 - val_accuracy: 0.9739
Epoch 5/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0655 - accuracy: 0.9807 - val_loss: 0.0791 - val_accuracy: 0.9764
Epoch 6/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0510 - accuracy: 0.9845 - val_loss: 0.0710 - val_accuracy: 0.9787
Epoch 7/20
60000/60000 [==============================] - 3s 50us/sample - loss: 0.0402 - accuracy: 0.9882 - val_loss: 0.0731 - val_accuracy: 0.9780
Epoch 8/20
60000/60000 [==============================] - 3s 50us/sample - loss: 0.0317 - accuracy: 0.9904 - val_loss: 0.0689 - val_accuracy: 0.9789
Epoch 9/20
60000/60000 [==============================] - 3s 51us/sample - loss: 0.0251 - accuracy: 0.9930 - val_loss: 0.0670 - val_accuracy: 0.9780
Epoch 10/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0198 - accuracy: 0.9944 - val_loss: 0.0612 - val_accuracy: 0.9817
Epoch 11/20
60000/60000 [==============================] - 3s 53us/sample - loss: 0.0154 - accuracy: 0.9963 - val_loss: 0.0638 - val_accuracy: 0.9806
Epoch 12/20
60000/60000 [==============================] - 3s 50us/sample - loss: 0.0122 - accuracy: 0.9969 - val_loss: 0.0608 - val_accuracy: 0.9828
Epoch 13/20
60000/60000 [==============================] - 3s 51us/sample - loss: 0.0095 - accuracy: 0.9977 - val_loss: 0.0677 - val_accuracy: 0.9805
Epoch 14/20
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0074 - accuracy: 0.9985 - val_loss: 0.0642 - val_accuracy: 0.9825
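###Markdown
Note that `LeakyReLU` is imported above but is typically used as its own layer rather than passed as an activation string. As a minimal sketch of an assumed extension of the experiment, it could be added to the comparison like this:
###Code
# Hypothetical extension: apply LeakyReLU as its own layer after a linear Dense layer
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(LeakyReLU(alpha=0.1))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
model.summary()
###Output
_____no_output_____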
|
my_notes/chapter05_notes.ipynb
|
###Markdown
Activity 14
###Code
class Polygon:
"""A class to capture common utilities for dealing with shapes"""
def __init__(self, side_lengths):
self.side_lengths = side_lengths
def __str__(self):
return f'Polygon with {self.side_lengths} sides'
@property
def num_sides(self):
return len(self.side_lengths)
@property
def perimeter(self):
return sum(self.side_lengths)
class Rectangle(Polygon):
def __init__(self, height, width):
super().__init__([height, width, height, width])
@property
def area(self):
return self.side_lengths[0] * self.side_lengths[1]
class Square(Rectangle):
def __init__(self, height):
super().__init__(height, height)
r = Rectangle(1, 5)
r.area, r.perimeter
s = Square(5)
s.area, s.perimeter
###Output
_____no_output_____
|
examples/gallery/demos/bokeh/bars_economic.ipynb
|
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - bars_economic](../matplotlib/bars_economic.ipynb)
###Code
import pandas as pd
import holoviews as hv
hv.extension('bokeh','matplotlib')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
key_dimensions = [('year', 'Year'), ('country', 'Country')]
value_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),
('gdp', 'GDP Growth'), ('trade', 'Trade')]
macro = hv.Table(macro_df, key_dimensions, value_dimensions)
###Output
_____no_output_____
###Markdown
Plot
###Code
%%opts Bars [stack_index=1 xrotation=90 width=600 show_legend=False tools=['hover']]
%%opts Bars (color=Cycle('Category20'))
macro.to.bars([ 'Year', 'Country'], 'Trade', [])
###Output
_____no_output_____
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - bars_economic](../matplotlib/bars_economic.ipynb)
###Code
import pandas as pd
import holoviews as hv
from holoviews import opts
hv.extension('bokeh','matplotlib')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
key_dimensions = [('year', 'Year'), ('country', 'Country')]
value_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),
('gdp', 'GDP Growth'), ('trade', 'Trade')]
macro = hv.Table(macro_df, key_dimensions, value_dimensions)
###Output
_____no_output_____
###Markdown
Plot
###Code
bars = macro.to.bars(['Year', 'Country'], 'Trade', [])
bars.opts(
opts.Bars(color=hv.Cycle('Category20'), show_legend=False, stacked=True,
tools=['hover'], width=600, xrotation=90))
###Output
_____no_output_____
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - bars_economic](../matplotlib/bars_economic.ipynb)
###Code
import numpy as np
import pandas as pd
import holoviews as hv
hv.extension('bokeh','matplotlib')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
key_dimensions = [('year', 'Year'), ('country', 'Country')]
value_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),
('gdp', 'GDP Growth'), ('trade', 'Trade')]
macro = hv.Table(macro_df, kdims=key_dimensions, vdims=value_dimensions)
###Output
_____no_output_____
###Markdown
Plot
###Code
%%opts Bars [stack_index=1 xrotation=90 legend_cols=7 show_legend=False show_frame=False tools=['hover']]
%%opts Bars (color=Cycle('Category20'))
macro.to.bars([ 'Year', 'Country'], 'Trade', [])
###Output
_____no_output_____
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - bars_economic](../matplotlib/bars_economic.ipynb)
###Code
import numpy as np
import pandas as pd
import holoviews as hv
hv.extension('bokeh','matplotlib')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
key_dimensions = [('year', 'Year'), ('country', 'Country')]
value_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),
('gdp', 'GDP Growth'), ('trade', 'Trade')]
macro = hv.Table(macro_df, key_dimensions, value_dimensions)
###Output
_____no_output_____
###Markdown
Plot
###Code
%%opts Bars [stack_index=1 xrotation=90 width=600 show_legend=False tools=['hover']]
%%opts Bars (color=Cycle('Category20'))
macro.to.bars([ 'Year', 'Country'], 'Trade', [])
###Output
_____no_output_____
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - bars_economic](../matplotlib/bars_economic.ipynb)
###Code
import pandas as pd
import holoviews as hv
from holoviews import opts
hv.extension('bokeh','matplotlib')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
key_dimensions = [('year', 'Year'), ('country', 'Country')]
value_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),
('gdp', 'GDP Growth'), ('trade', 'Trade')]
macro = hv.Table(macro_df, key_dimensions, value_dimensions)
###Output
_____no_output_____
###Markdown
Plot
###Code
bars = macro.to.bars(['Year', 'Country'], 'Trade', [])
bars.options(
opts.Bars(color=hv.Cycle('Category20'), show_legend=False, stacked=True,
tools=['hover'], width=600, xrotation=90))
###Output
_____no_output_____
|
qiskit-textbook/content/ch-quantum-hardware/error-correction-repetition-code.ipynb
|
###Markdown
Introduction to Quantum Error Correction via the Repetition Code Introduction Quantum computing requires us to encode information in qubits. Most quantum algorithms developed over the past few decades have assumed that these qubits are perfect: they can be prepared in any state we desire, and be manipulated with complete precision. Qubits that obey these assumptions are often known as *logical qubits*. The last few decades have also seen great advances in finding physical systems that behave as qubits, with better quality qubits being developed all the time. However, the imperfections can never be removed entirely. These qubits will always be much too imprecise to serve directly as logical qubits. Instead, we refer to them as *physical qubits*. In the current era of quantum computing, we seek to use physical qubits despite their imperfections, by designing custom algorithms and using error mitigation effects. For the future era of fault-tolerance, however, we must find ways to build logical qubits from physical qubits. This will be done through the process of quantum error correction, in which logical qubits are encoded in a large number of physical qubits. The encoding is maintained by constantly putting the physical qubits through a highly entangling circuit. Auxiliary degrees of freedom are also constantly measured, to detect signs of errors and allow their effects to be removed. The operations on the logical qubits required to implement quantum computation will be performed by essentially making small perturbations to this procedure. Because of the vast amount of effort required for this process, most operations performed in fault-tolerant quantum computers will be done to serve the purpose of error detection and correction. So when benchmarking our progress towards fault-tolerant quantum computation, we must keep track of how well our devices perform error correction. In this chapter we will look at a particular example of error correction: the repetition code. Though not a true example of quantum error correction - it uses physical qubits to encode a logical *bit*, rather than a qubit - it serves as a simple guide to all the basic concepts in any quantum error correcting code. We will also see how it can be run on current prototype devices. Introduction to the repetition code The basics of error correction The basic ideas behind error correction are the same for quantum information as for classical information. This allows us to begin by considering a very straightforward example: speaking on the phone. If someone asks you a question to which the answer is 'yes' or 'no', the way you give your response will depend on two factors:* How important is it that you are understood correctly?* How good is your connection? Both of these can be parameterized with probabilities. For the first, we can use $P_a$, the maximum acceptable probability of being misunderstood. If you are being asked to confirm a preference for ice cream flavours, and don't mind too much if you get vanilla rather than chocolate, $P_a$ might be quite high. If you are being asked a question on which someone's life depends, however, $P_a$ will be much lower. For the second we can use $p$, the probability that your answer is garbled by a bad connection. For simplicity, let's imagine a case where a garbled 'yes' doesn't simply sound like nonsense, but sounds like a 'no'. And similarly a 'no' is transformed into 'yes'.
Then $p$ is the probability that you are completely misunderstood. A good connection or a relatively unimportant question will result in $p<P_a$. In this case it is fine to simply answer in the most direct way possible: you just say 'yes' or 'no'. If, however, your connection is poor and your answer is important, we will have $p>P_a$. A single 'yes' or 'no' is not enough in this case. The probability of being misunderstood would be too high. Instead we must encode our answer in a more complex structure, allowing the receiver to decode our meaning despite the possibility of the message being disrupted. The simplest method is the one that many would do without thinking: simply repeat the answer many times. For example, say 'yes, yes, yes' instead of 'yes' or 'no, no, no' instead of 'no'. If the receiver hears 'yes, yes, yes' in this case, they will of course conclude that the sender meant 'yes'. If they hear 'no, yes, yes', 'yes, no, yes' or 'yes, yes, no', they will probably conclude the same thing, since there is more positivity than negativity in the answer. To be misunderstood in this case, at least two of the replies need to be garbled. The probability for this, $P$, will be less than $p$. When encoded in this way, the message therefore becomes more likely to be understood. The code cell below shows an example of this.
###Code
p = 0.01
P = 3 * p**2 * (1-p) + p**3 # probability of 2 or 3 errors
print('Probability of a single reply being garbled:',p)
print('Probability of the majority of three replies being garbled:',P)
###Output
Probability of a single reply being garbled: 0.01
Probability of the majority of three replies being garbled: 0.00029800000000000003
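###Markdown
The same calculation can be generalized to any odd number of repetitions. Here is a minimal sketch (assuming `scipy` is available for the binomial coefficient; the helper name is only illustrative):
###Code
from scipy.special import comb

def majority_failure_prob(p, n):
    # probability that more than half of n independent replies are garbled,
    # so that a majority vote gives the wrong answer (n assumed odd)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in [1, 3, 5, 7]:
    print(n, 'repetitions:', majority_failure_prob(0.01, n))
###Output
_____no_output_____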
###Markdown
If $P<P_a$, this technique solves our problem. If not, we can simply add more repetitions. The fact that $P<p$ above comes from the fact that we need at least two replies to be garbled to flip the majority, and so even the most likely possibilities have a probability of $\sim p^2$. For five repetitions we'd need at least three replies to be garbled to flip the majority, which happens with probability $\sim p^3$. The value for $P$ in this case would then be even lower. Indeed, as we increase the number of repetitions, $P$ will decrease exponentially. No matter how bad the connection, or how certain we need to be of our message getting through correctly, we can achieve it by just repeating our answer enough times. Though this is a simple example, it contains all the aspects of error correction.* There is some information to be sent or stored: In this case, a 'yes' or 'no'.* The information is encoded in a larger system to protect it against noise: In this case, by repeating the message.* The information is finally decoded, mitigating for the effects of noise: In this case, by trusting the majority of the transmitted messages. This same encoding scheme can also be used for binary, by simply substituting `0` and `1` for 'yes' and 'no'. It can therefore also be easily generalized to qubits by using the states $\left|0\right\rangle$ and $\left|1\right\rangle$. In each case it is known as the *repetition code*. Many other forms of encoding are also possible in both the classical and quantum cases, which outperform the repetition code in many ways. However, its status as the simplest encoding does lend it to certain applications. One is exactly what it is used for in Qiskit: as the first and simplest test of implementing the ideas behind quantum error correction. Correcting errors in qubits We will now implement these ideas explicitly using Qiskit. To see the effects of imperfect qubits, we can simply use the qubits of the prototype devices. We can also reproduce the effects in simulations. The function below creates a simple noise model in order to do this. These go beyond the simple case discussed earlier, of a single noise event which happens with a probability $p$. Instead we consider two forms of error that can occur. One is a gate error: an imperfection in any operation we perform. We model this here in a simple way, using so-called depolarizing noise. The effect of this will be, with probability $p_{gate}$, to replace the state of any qubit with a completely random state. For two qubit gates, it is applied independently to each qubit. The other form of noise is that for measurement. This simply flips a `0` to a `1` and vice versa immediately before measurement with probability $p_{meas}$.
###Code
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors import pauli_error, depolarizing_error
def get_noise(p_meas,p_gate):
error_meas = pauli_error([('X',p_meas), ('I', 1 - p_meas)])
error_gate1 = depolarizing_error(p_gate, 1)
error_gate2 = error_gate1.tensor(error_gate1)
noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(error_meas, "measure") # measurement error is applied to measurements
noise_model.add_all_qubit_quantum_error(error_gate1, ["x"]) # single qubit gate error is applied to x gates
noise_model.add_all_qubit_quantum_error(error_gate2, ["cx"]) # two qubit gate error is applied to cx gates
return noise_model
###Output
_____no_output_____
###Markdown
With this we'll now create such a noise model with a probability of $1\%$ for each type of error.
###Code
noise_model = get_noise(0.01,0.01)
###Output
_____no_output_____
###Markdown
Let's see what effect this has when we try to store a `0` using three qubits in state $\left|0\right\rangle$. We'll repeat the process `shots=1024` times to see how likely different results are.
###Code
from qiskit import QuantumCircuit, execute, Aer
qc0 = QuantumCircuit(3,3,name='0') # initialize circuit with three qubits in the 0 state
qc0.measure(qc0.qregs[0],qc0.cregs[0]) # measure the qubits
# run the circuit with the noise model and extract the counts
counts = execute( qc0, Aer.get_backend('qasm_simulator'),noise_model=noise_model).result().get_counts()
print(counts)
###Output
{'100': 4, '001': 17, '000': 992, '010': 11}
###Markdown
Here we see that almost all results still come out `'000'`, as they would if there were no noise. Of the remaining possibilities, those with a majority of `0`s are most likely. In total, far fewer than 100 samples come out with a majority of `1`s. When using this circuit to encode a `0`, this means that $P<1\%$. Now let's try the same for storing a `1` using three qubits in state $\left|1\right\rangle$.
###Code
qc1 = QuantumCircuit(3,3,name='1') # initialize circuit with three qubits, which start in the 0 state
qc1.x(qc1.qregs[0]) # flip each 0 to 1
qc1.measure(qc1.qregs[0],qc1.cregs[0]) # measure the qubits
# run the circuit with the noise model and extract the counts
counts = execute( qc1, Aer.get_backend('qasm_simulator'),noise_model=noise_model).result().get_counts()
print(counts)
###Output
{'111': 983, '100': 1, '101': 12, '011': 12, '110': 16}
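###Markdown
As a quick check, here is a minimal majority-vote decoding sketch (the helper name `logical_error_prob` is only illustrative) that estimates the logical error probability $P$ from the `counts` of the previous cell:
###Code
def logical_error_prob(counts, encoded):
    # fraction of shots whose majority vote disagrees with the encoded bit value
    shots = sum(counts.values())
    wrong = sum(freq for bits, freq in counts.items()
                if (bits.count('1') > bits.count('0')) != (encoded == '1'))
    return wrong / shots

print('Estimated P when encoding a 1:', logical_error_prob(counts, '1'))
###Output
_____no_output_____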
###Markdown
The number of samples that come out with a majority in the wrong state (`0` in this case) is again far fewer than 100, so $P<1\%$. Whether we store a `0` or a `1`, we can retrieve the information with a smaller probability of error than either of our sources of noise. This was possible because the noise we considered was relatively weak. As we increase $p_{meas}$ and $p_{gate}$, the probability $P$ will also increase. The extreme case of this is for either of them to have a $50/50$ chance of applying the bit flip error, `x`. For example, let's run the same circuit as before but with $p_{meas}=0.5$ and $p_{gate}=0$.
###Code
noise_model = get_noise(0.5,0.0)
counts = execute( qc1, Aer.get_backend('qasm_simulator'),noise_model=noise_model).result().get_counts()
print(counts)
###Output
{'000': 128, '001': 127, '111': 115, '101': 133, '110': 140, '100': 127, '011': 126, '010': 128}
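###Markdown
Applying the same illustrative decoder from above to these new counts gives an estimate of $P$ close to $0.5$, no better than a coin flip:
###Code
# reuse the illustrative majority-vote helper defined earlier
print('Estimated P with p_meas = 0.5:', logical_error_prob(counts, '1'))
###Output
_____no_output_____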
###Markdown
With this noise, all outcomes occur with equal probability, with differences in results being due only to statistical noise. No trace of the encoded state remains. This is an important point to consider for error correction: sometimes the noise is too strong to be corrected. The optimal approach is to combine a good way of encoding the information you require, with hardware whose noise is not too strong. Storing qubits So far, we have considered cases where there is no delay between encoding and decoding. For qubits, this means that there is no significant amount of time that passes between initializing the circuit and making the final measurements. However, there are many cases for which there will be a significant delay. As an obvious example, one may wish to encode a quantum state and store it for a long time, like a quantum hard drive. A less obvious but much more important example is performing fault-tolerant quantum computation itself. For this, we need to store quantum states and preserve their integrity during the computation. This must also be done in a way that allows us to manipulate the stored information in any way we need, and which corrects any errors we may introduce when performing the manipulations. In all cases, we need to account for the fact that errors do not only occur when something happens (like a gate or measurement); they also occur when the qubits are idle. Such noise is due to the fact that the qubits interact with each other and their environment. The longer we leave our qubits idle for, the greater the effects of this noise become. If we leave them for long enough, we'll encounter a situation like the $p_{meas}=0.5$ case above, where the noise is too strong for errors to be reliably corrected. The solution is to keep measuring throughout. No qubit is left idle for too long. Instead, information is constantly being extracted from the system to keep track of the errors that have occurred. For the case of classical information, where we simply wish to store a `0` or `1`, this can be done by just constantly measuring the value of each qubit. By keeping track of when the values change due to noise, we can easily deduce a history of when errors occurred. For quantum information, however, it is not so easy. For example, consider the case that we wish to encode the logical state $\left|+\right\rangle$. Our encoding is such that$$\left|0\right\rangle \rightarrow \left|000\right\rangle,~~~ \left|1\right\rangle \rightarrow \left|111\right\rangle.$$To encode the logical $\left|+\right\rangle$ state we therefore need$$\left|+\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle+\left|1\right\rangle\right)\rightarrow \frac{1}{\sqrt{2}}\left(\left|000\right\rangle+\left|111\right\rangle\right).$$With the repetition encoding that we are using, a z measurement (which distinguishes between the $\left|0\right\rangle$ and $\left|1\right\rangle$ states) of the logical qubit is done using a z measurement of each physical qubit. The final result for the logical measurement is decoded from the physical qubit measurement results by simply looking at which output is in the majority. As mentioned earlier, we can keep track of errors on logical qubits that are stored for a long time by constantly performing z measurements of the physical qubits. However, note that this effectively corresponds to constantly performing z measurements of the logical qubit. This is fine if we are simply storing a `0` or `1`, but it has undesired effects if we are storing a superposition.
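As an aside, the encoded $\left|+\right\rangle$ state written above can be prepared with a Hadamard followed by two CNOTs. Here is a minimal sketch using the statevector simulator (the circuit name is only illustrative):
###Code
# A minimal sketch: preparing the encoded logical |+> state (|000> + |111>)/sqrt(2)
qc_plus = QuantumCircuit(3)
qc_plus.h(0)
qc_plus.cx(0, 1)
qc_plus.cx(0, 2)
state = execute(qc_plus, Aer.get_backend('statevector_simulator')).result().get_statevector()
print(state)  # amplitudes of about 0.707 on |000> and |111>
###Output
_____no_output_____
###Markdown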
Specifically: the first time we perform such a round of z measurements to check for errors, we will collapse the superposition. This is not ideal. If we wanted to do some computation on our logical qubit, or if we wish to perform a basis change before final measurement, we need to preserve the superposition. Destroying it is an error. But this is not an error caused by imperfections in our devices. It is an error that we have introduced as part of our attempts to correct errors. And since we cannot hope to recreate any arbitrary superposition stored in our quantum computer, it is an error that cannot be corrected. For this reason, we must find another way of keeping track of the errors that occur when our logical qubit is stored for long times. This should give us the information we need to detect and correct errors, and to decode the final measurement result with high probability. However, it should not cause uncorrectable errors to occur during the process by collapsing superpositions that we need to preserve. The way to do this is with the following circuit element.
###Code
from qiskit import QuantumRegister, ClassicalRegister
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
cq = QuantumRegister(2,'code\ qubit\ ')
lq = QuantumRegister(1,'ancilla\ qubit\ ')
sb = ClassicalRegister(1,'syndrome\ bit\ ')
qc = QuantumCircuit(cq,lq,sb)
qc.cx(cq[0],lq[0])
qc.cx(cq[1],lq[0])
qc.measure(lq,sb)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Here we have three physical qubits. Two are called 'code qubits', and the other is called an 'ancilla qubit'. One bit of output is extracted, called the syndrome bit. The ancilla qubit is always initialized in state $\left|0\right\rangle$. The code qubits, however, can be initialized in different states. To see what effect different inputs have on the output, we can create a circuit `qc_init` that prepares the code qubits in some state, and then run the circuit `qc_init+qc`.First, the trivial case: `qc_init` does nothing, and so the code qubits are initially $\left|00\right\rangle$.
###Code
qc_init = QuantumCircuit(cq)
(qc_init+qc).draw(output='mpl')
counts = execute( qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts()
print('Results:',counts)
###Output
Results: {'0': 1024}
###Markdown
The outcome, in all cases, is `0`.Now let's try an initial state of $\left|11\right\rangle$.
###Code
qc_init = QuantumCircuit(cq)
qc_init.x(cq)
(qc_init+qc).draw(output='mpl')
counts = execute( qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts()
print('Results:',counts)
###Output
Results: {'0': 1024}
###Markdown
The outcome in this case is also always `0`. Given the linearity of quantum mechanics, we can expect the same to be true also for any superposition of $\left|00\right\rangle$ and $\left|11\right\rangle$, such as the example below.
###Code
qc_init = QuantumCircuit(cq)
qc_init.h(cq[0])
qc_init.cx(cq[0],cq[1])
(qc_init+qc).draw(output='mpl')
counts = execute( qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts()
print('Results:',counts)
###Output
Results: {'0': 1024}
###Markdown
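We can go one step further and check that this measurement really does leave such a superposition intact. Below is a quick statevector check (a sketch reusing the registers and the circuit `qc` defined above): the final state still has two equal non-zero amplitudes, so the code qubits remain in a superposition of $\left|00\right\rangle$ and $\left|11\right\rangle$ after the ancilla has been measured.
###Code
# Sketch: verify that the syndrome measurement does not collapse a
# superposition of |00> and |11> on the code qubits.
qc_init = QuantumCircuit(cq)
qc_init.h(cq[0])
qc_init.cx(cq[0],cq[1])
state = execute( qc_init+qc, Aer.get_backend('statevector_simulator') ).result().get_statevector()
print(state)  # two equal non-zero amplitudes remain
###Output
_____no_output_____
###Markdown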
The opposite outcome will be found for an initial state of $\left|01\right\rangle$, $\left|10\right\rangle$ or any superposition thereof.
###Code
qc_init = QuantumCircuit(cq)
qc_init.h(cq[0])
qc_init.cx(cq[0],cq[1])
qc_init.x(cq[0])
(qc_init+qc).draw(output='mpl')
counts = execute( qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts()
print('Results:',counts)
###Output
Results: {'1': 1024}
###Markdown
In such cases the output is always `'1'`.This measurement is therefore telling us about a collective property of multiple qubits. Specifically, it looks at the two code qubits and determines whether their state is the same or different in the z basis. For basis states that are the same in the z basis, like $\left|00\right\rangle$ and $\left|11\right\rangle$, the measurement simply returns `0`. It also does so for any superposition of these. Since it does not distinguish between these states in any way, it also does not collapse such a superposition.Similarly, for basis states that are different in the z basis it returns a `1`. This occurs for $\left|01\right\rangle$, $\left|10\right\rangle$ or any superposition thereof.Now suppose we apply such a 'syndrome measurement' on all pairs of physical qubits in our repetition code. If their state is described by a repeated $\left|0\right\rangle$, a repeated $\left|1\right\rangle$, or any superposition thereof, all the syndrome measurements will return `0`. Given this result, we will know that our states are indeed encoded in the repeated states that we want them to be, and can deduce that no errors have occurred. If some syndrome measurements return `1`, however, it is a signature of an error. We can therefore use these measurement results to determine how to decode the result. Quantum repetition codeWe now know enough to understand exactly how the quantum version of the repetition code is implemented. We can use it in Qiskit by importing the required tools from Ignis.
###Code
from qiskit.ignis.verification.topological_codes import RepetitionCode
from qiskit.ignis.verification.topological_codes import lookuptable_decoding
from qiskit.ignis.verification.topological_codes import GraphDecoder
###Output
_____no_output_____
###Markdown
We are free to choose how many physical qubits we want the logical qubit to be encoded in. We can also choose how many times the syndrome measurements will be applied while we store our logical qubit, before the final readout measurement. Let us start with the smallest non-trivial case: three repetitions and one syndrome measurement round. The circuits for the repetition code can then be created automatically using the `RepetitionCode` object from Qiskit-Ignis.
###Code
n = 3
T = 1
code = RepetitionCode(n,T)
###Output
_____no_output_____
###Markdown
With this we can inspect various properties of the code, such as the names of the qubit registers used for the code and ancilla qubits. The `RepetitionCode` contains two quantum circuits that implement the code: One for each of the two possible logical bit values. Here are those for logical `0` and `1`, respectively.
###Code
# this bit is just needed to make the labels look nice
for reg in code.circuit['0'].qregs+code.circuit['1'].cregs:
reg.name = reg.name.replace('_','\ ') + '\ '
code.circuit['0'].draw(output='mpl')
code.circuit['1'].draw(output='mpl')
###Output
_____no_output_____
###Markdown
In these circuits, we have two types of physical qubits. There are the 'code qubits', which are the three physical qubits across which the logical state is encoded. There are also the 'link qubits', which serve as the ancilla qubits for the syndrome measurements.Our single round of syndrome measurements in these circuits consists of just two syndrome measurements. One compares code qubits 0 and 1, and the other compares code qubits 1 and 2. One might expect that a further measurement, comparing code qubits 0 and 2, should be required to create a full set. However, these two are sufficient. This is because the information on whether 0 and 2 have the same z basis state can be inferred by combining the information about 0 and 1 with that for 1 and 2. Indeed, for $n$ qubits, we can get the required information from just $n-1$ syndrome measurements of neighbouring pairs of qubits.Running these circuits on a simulator without any noise leads to very simple results.
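Before doing so, here is a quick classical sanity check of that sufficiency claim (a minimal sketch in plain Python, separate from the code object): the 0-2 parity is always the XOR of the 0-1 and 1-2 parities, so the third comparison would add no new information.
###Code
# Sketch: for every 3-bit string, the parity of bits 0 and 2 equals the XOR of
# the 0-1 parity and the 1-2 parity.
for b in range(8):
    b0, b1, b2 = (b >> 0) & 1, (b >> 1) & 1, (b >> 2) & 1
    assert (b0 ^ b2) == (b0 ^ b1) ^ (b1 ^ b2)
print('0-2 parity is always the XOR of the 0-1 and 1-2 parities')
###Output
_____no_output_____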
###Code
def get_raw_results(code,noise_model=None):
circuits = code.get_circuit_list()
raw_results = {}
for log in range(2):
job = execute( circuits[log], Aer.get_backend('qasm_simulator'), noise_model=noise_model)
raw_results[str(log)] = job.result().get_counts(str(log))
return raw_results
raw_results = get_raw_results(code)
for log in raw_results:
print('Logical',log,':',raw_results[log],'\n')
###Output
Logical 0 : {'000 00': 1024}
Logical 1 : {'111 00': 1024}
###Markdown
Here we see that the output comes in two parts. The part on the right holds the outcomes of the two syndrome measurements. That on the left holds the outcomes of the three final measurements of the code qubits.For more measurement rounds, $T=4$ for example, we would have the results of more syndrome measurements on the right.
###Code
code = RepetitionCode(n,4)
raw_results = get_raw_results(code)
for log in raw_results:
print('Logical',log,':',raw_results[log],'\n')
###Output
Logical 0 : {'000 00 00 00 00': 1024}
Logical 1 : {'111 00 00 00 00': 1024}
###Markdown
For more repetitions, $n=5$ for example, each set of measurements would be larger. The final measurement on the left would be of $n$ qubits. The $T$ syndrome measurements would each be of the $n-1$ possible neighbouring pairs.
###Code
code = RepetitionCode(5,4)
raw_results = get_raw_results(code)
for log in raw_results:
print('Logical',log,':',raw_results[log],'\n')
###Output
Logical 0 : {'00000 0000 0000 0000 0000': 1024}
Logical 1 : {'11111 0000 0000 0000 0000': 1024}
###Markdown
Lookup table decodingNow let's return to the $n=3$, $T=1$ example and look at a case with some noise.
###Code
code = RepetitionCode(3,1)
noise_model = get_noise(0.05,0.05)
raw_results = get_raw_results(code,noise_model)
for log in raw_results:
print('Logical',log,':',raw_results[log],'\n')
###Output
Logical 0 : {'100 01': 4, '000 00': 642, '000 01': 78, '100 10': 5, '110 01': 4, '101 01': 1, '101 00': 1, '000 11': 5, '101 10': 1, '011 00': 6, '001 01': 5, '010 11': 4, '100 00': 46, '110 10': 1, '001 10': 2, '010 01': 24, '110 00': 4, '001 00': 57, '010 10': 6, '010 00': 57, '000 10': 71}
Logical 1 : {'011 11': 4, '100 01': 2, '101 11': 22, '011 01': 4, '111 01': 69, '110 01': 19, '101 01': 26, '101 00': 49, '111 10': 75, '111 11': 6, '101 10': 5, '110 11': 3, '001 11': 4, '011 00': 48, '001 01': 2, '010 11': 1, '100 00': 7, '110 10': 10, '001 10': 2, '010 01': 3, '011 10': 19, '110 00': 35, '001 00': 1, '111 00': 603, '100 10': 2, '010 00': 3}
###Markdown
Here we have created `raw_results`, a dictionary that holds the results both for a circuit encoding a logical `0` and for one encoding a logical `1`.Our task when confronted with any of the possible outcomes we see here is to determine what the outcome should have been, if there was no noise. For an outcome of `'000 00'` or `'111 00'`, the answer is obvious. These are the results we just saw for a logical `0` and logical `1`, respectively, when no errors occur. The former is the most common outcome for the logical `0` even with noise, and the latter is the most common for the logical `1`. We will therefore conclude that the outcome was indeed that for logical `0` whenever we encounter `'000 00'`, and the same for logical `1` when we encounter `'111 00'`.Though this tactic is optimal, it can nevertheless fail. Note that `'111 00'` typically occurs in a handful of cases for an encoded `0`, and `'000 00'` similarly occurs for an encoded `1`. In this case, through no fault of our own, we will incorrectly decode the output. In these cases, a large number of errors conspired to make it look like we had a noiseless case of the opposite logical value, and so correction becomes impossible.We can employ a similar tactic to decode all other outcomes. The outcome `'001 00'`, for example, occurs far more for a logical `0` than a logical `1`. This is because it could be caused by just a single measurement error in the former case (which incorrectly reports a single `0` to be `1`), but would require at least two errors in the latter. So whenever we see `'001 00'`, we can decode it as a logical `0`.Applying this tactic over all the strings is a form of so-called 'lookup table decoding'. This is where every possible outcome is analyzed, and the most likely value to decode it as is determined. For many qubits, this quickly becomes intractable, as the number of possible outcomes becomes so large. In these cases, more algorithmic decoders are needed. However, lookup table decoding works well for testing out small codes.We can use tools in Qiskit to implement lookup table decoding for any code. For this we need two sets of results. One is the set of results that we actually want to decode, and for which we want to calculate the probability of incorrect decoding, $P$. We will use the `raw_results` we already have for this.The other set of results is one to be used as the lookup table. This will need to be run for a large number of samples, to ensure that it gets good statistics for each possible outcome. We'll use `shots=10000`.
###Code
circuits = code.get_circuit_list()
table_results = {}
for log in range(2):
job = execute( circuits[log], Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=10000 )
table_results[str(log)] = job.result().get_counts(str(log))
###Output
_____no_output_____
###Markdown
With this data, which we call `table_results`, we can now use the `lookuptable_decoding` function from Qiskit. This takes each outcome from `raw_results` and decodes it with the information in `table_results`. Then it checks if the decoding was correct, and uses this information to calculate $P$.
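For intuition, the decoding tactic described above can also be written out by hand. The following is a minimal sketch (illustration only, independent of the Ignis function used below): each observed string is decoded to whichever logical value produced it more often in the lookup table, and $P$ is the fraction of samples for which this guess turns out to be wrong.
###Code
# Hand-rolled lookup table decoding (sketch). Ties, and strings never seen in
# `table_results`, are resolved arbitrarily in favour of '0'.
P_sketch = {}
for log in ['0','1']:
    shots = sum(raw_results[log].values())
    errors = 0
    for string, counts in raw_results[log].items():
        votes = {l: table_results[l].get(string, 0) for l in ['0','1']}
        decoded = max(votes, key=votes.get)
        if decoded != log:
            errors += counts
    P_sketch[log] = errors / shots
print(P_sketch)
###Output
_____no_output_____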
###Code
P = lookuptable_decoding(raw_results,table_results)
print('P =',P)
###Output
P = {'0': 0.0181, '1': 0.0193}
###Markdown
Here we see that the values for $P$ are lower than those for $p_{meas}$ and $p_{gate}$, so we get an improvement in the reliability for storing the bit value. Note also that the value of $P$ for an encoded `1` is higher than that for `0`. This is because the encoding of `1` requires the application of `x` gates, which are an additional source of noise. Graph theoretic decodingThe decoding considered above produces the best possible results, and does so without needing to use any details of the code. However, it has a major drawback that counters these advantages: the lookup table grows exponentially large as code size increases. For this reason, decoding is typically done in a more algorithmic manner that takes into account the structure of the code and its resulting syndromes.For the codes of `topological_codes` this structure is revealed using post-processing of the syndromes. Instead of using the form shown above, with the final measurement of the code qubits on the left and the outputs of the syndrome measurement rounds on the right, we use the `process_results` method of the code object to rewrite them in a different form.For example, below is the processed form of a `raw_results` dictionary, in this case for $n=3$ and $T=2$. Only results with 50 or more samples are shown for clarity.
###Code
code = RepetitionCode(3,2)
raw_results = get_raw_results(code,noise_model)
results = code.process_results( raw_results )
for log in ['0','1']:
print('\nLogical ' + log + ':')
print('raw results ', {string:raw_results[log][string] for string in raw_results[log] if raw_results[log][string]>=50 })
print('processed results ', {string:results[log][string] for string in results[log] if results[log][string]>=50 })
###Output
Logical 0:
raw results {'000 10 00': 63, '000 00 00': 490, '000 00 01': 55}
processed results {'0 0 00 10 10': 63, '0 0 00 00 00': 490, '0 0 01 01 00': 55}
Logical 1:
raw results {'111 00 00': 470}
processed results {'1 1 00 00 00': 470}
###Markdown
Here we can see that `'000 00 00'` has been transformed to `'0 0 00 00 00'`, and `'111 00 00'` to `'1 1 00 00 00'`, and so on.In these new strings, the `0 0` to the far left for the logical `0` results and the `1 1` to the far left of the logical `1` results are the logical readout. Any code qubit could be used for this readout, since they should (without errors) all be equal. It would therefore be possible in principle to just have a single `0` or `1` at this position. We could also do as in the original form of the result and have $n$, one for each qubit. Instead we use two, from the two qubits at either end of the line. The reason for this will be shown later. In the absence of errors, these two values will always be equal, since they represent the same encoded bit value.After the logical values follow the $n-1$ results of the syndrome measurements for the first round. A `0` implies that the corresponding pair of qubits have the same value, and `1` implies they are different from each other. There are $n-1$ results because the line of $n$ code qubits has $n-1$ possible neighboring pairs. In the absence of errors, they will all be `0`. This is exactly the same as the first such set of syndrome results from the original form of the result.The next block is the next round of syndrome results. However, rather than presenting these results directly, it instead gives us the syndrome change between the first and second rounds. It is therefore the bitwise `XOR` of the syndrome measurement results from the second round with those from the first. In the absence of errors, they will all be `0`.Any subsequent blocks follow the same formula, though the last of all requires some comment. This is not measured using the standard method (with a link qubit). Instead it is calculated from the final readout measurement of all code qubits. Again it is presented as a syndrome change, and will be all `0` in the absence of errors. This is the $T+1$-th block of syndrome measurements since, as it is not done in the same way as the others, it is not counted among the $T$ syndrome measurement rounds.The following examples further illustrate this convention.**Example 1:** `0 0 0110 0000 0000` represents an $n=5$, $T=2$ repetition code with encoded `0`. The syndrome shows that (most likely) the middle code qubit was flipped by an error before the first measurement round. This causes it to disagree with both neighboring code qubits for the rest of the circuit. This is shown by the syndrome in the first round, but the blocks for subsequent rounds do not report it as it no longer represents a change. Other sets of errors could also have caused this syndrome, but they would need to be more complex and so presumably less likely.**Example 2:** `0 0 0010 0010 0000` represents an $n=5$, $T=2$ repetition code with encoded `0`. Here one of the syndrome measurements reported a difference between two code qubits in the first round, leading to a `1`. The next round did not see the same effect, and so resulted in a `0`. However, since this disagreed with the previous result for the same syndrome measurement, and since we track syndrome changes, this change results in another `1`. Subsequent rounds also do not detect anything, but this no longer represents a change and hence results in a `0` in the same position. Most likely the measurement result leading to the first `1` was an error.**Example 3:** `0 1 0000 0001 0000` represents an $n=5$, $T=2$ repetition code with encoded `1`. 
A code qubit on the end of the line is flipped before the second round of syndrome measurements. This is detected by only a single syndrome measurement, because it is on the end of the line. For the same reason, it also disturbs one of the logical readouts.Note that in all these examples, a single error causes exactly two characters in the string to change from the value they would have with no errors. This is the defining feature of the convention used to represent stabilizers in `topological_codes`. It is used to define the graph on which the decoding problem is defined. Specifically, the graph is constructed by first taking the circuit encoding logical `0`, for which all bit values in the output string should be `0`. Many copies of this are then created and run on a simulator, with a different single Pauli operator inserted into each. This is done for each of the three types of Pauli operator on each of the qubits and at every circuit depth. The output from each of these circuits can be used to determine the effects of each possible single error. Since the circuit contains only Clifford operations, the simulation can be performed efficiently.In each case, the error will change exactly two of the characters (unless it has no effect). A graph is then constructed for which each bit of the output string corresponds to a node, and the pairs of bits affected by the same error correspond to an edge.The process of decoding a particular output string typically requires the algorithm to deduce which set of errors occurred, given the syndrome found in the output string. This can be done by constructing a second graph, containing only nodes that correspond to non-trivial syndrome bits in the output. An edge is then placed between each pair of nodes, with a corresponding weight equal to the length of the minimal path between those nodes in the original graph. Finding a set of errors consistent with the syndrome then corresponds to finding a perfect matching of this graph. To deduce the most likely set of errors to have occurred, a good tactic would be to find one with the least possible number of errors that is consistent with the observed syndrome. This corresponds to a minimum weight perfect matching of the graph.Using minimum weight perfect matching is a standard decoding technique for the repetition code and surface code, and is implemented in Qiskit Ignis. It can also be used in other cases, such as color codes, but it does not find the best approximation of the most likely set of errors for every code and noise model. For that reason, other decoding techniques based on the same graph can be used. The `GraphDecoder` of Qiskit Ignis calculates these graphs for a given code, and will provide a range of methods to analyze it. At time of writing, only minimum weight perfect matching is implemented.Note that, for codes such as the surface code, it is not strictly true that each single error will change the value of only two bits in the output string. A $\sigma^y$ error, for example, would flip a pair of values corresponding to two different types of stabilizer, which are typically decoded independently. Output for these codes will therefore be presented in a way that acknowledges this, and analysis of such syndromes will correspondingly create multiple independent graphs to represent the different syndrome types. Running a repetition code benchmarking procedureWe will now run examples of repetition codes on real devices, and use the results as a benchmark. First, we will briefly summarize the process. 
This applies to this example of the repetition code, but also to other benchmarking procedures in `topological_codes`, and indeed to Qiskit Ignis in general. In each case, the following three-step process is used.1. A task is defined. Qiskit Ignis determines the set of circuits that must be run and creates them.2. The circuits are run. This is typically done using Qiskit. However, in principle any service or experimental equipment could be interfaced.3. Qiskit Ignis is used to process the results from the circuits, to create the output required for the given task.For `topological_codes`, step 1 requires the type and size of quantum error correction code to be chosen. Each type of code has a dedicated Python class. A corresponding object is initialized by providing the parameters required, such as `n` and `T` for a `RepetitionCode` object. The resulting object then contains the circuits corresponding to the given code encoding simple logical qubit states (such as $\left|0\right\rangle$ and $\left|1\right\rangle$), and then running the procedure of error detection for a specified number of rounds, before final readout in a straightforward logical basis (typically a standard $\left|0\right\rangle$/$\left|1\right\rangle$ measurement).For `topological_codes`, the main processing of step 3 is the decoding, which aims to mitigate any errors in the final readout by using the information obtained from error detection. The optimal algorithm for decoding typically varies between codes. However, codes with similar structure often make use of similar methods.The aim of `topological_codes` is to provide a variety of decoding methods, implemented such that all the decoders can be used on all of the codes. This is done by restricting to codes for which decoding can be described as a graph-theoretic minimization problem. The classic examples of such codes are the toric and surface codes. The property is also shared by 2D color codes and matching codes. All of these are prominent examples of so-called topological quantum error correcting codes, which led to the name of the subpackage. However, note that not all topological codes are compatible with such a decoder. Also, some non-topological codes will be compatible, such as the repetition code.The decoding is done by the `GraphDecoder` class. A corresponding object is initialized by providing the code object for which the decoding will be performed. This is then used to determine the graph on which the decoding problem will be defined. The results can then be processed using the various methods of the decoder object.In the following we will see the above ideas put into practice for the repetition code. In doing this we will employ two Boolean variables, `step_2` and `step_3`. The variable `step_2` is used to show which parts of the program need to be run when taking data from a device, and `step_3` is used to show the parts which process the resulting data.Both are set to `False` by default, to ensure that all the program snippets below can be run using only previously collected and processed data. However, to obtain new data one only needs to use `step_2 = True`, and to perform decoding on any data one only needs to use `step_3 = True`.
###Code
step_2 = False
step_3 = False
###Output
_____no_output_____
###Markdown
To benchmark a real device we need the tools required to access that device over the cloud, and compile circuits suitable to run on it. These are imported as follows.
###Code
from qiskit import IBMQ
from qiskit.compiler import transpile
from qiskit.transpiler import PassManager
###Output
_____no_output_____
###Markdown
We can now create the backend object, which is used to run the circuits. This is done by supplying the string used to specify the device. Here `'ibmq_16_melbourne'` is used, which has 15 active qubits at time of writing. We will also consider the 53 qubit *Rochester* device, which is specified with `'ibmq_rochester'`.
###Code
device_name = 'ibmq_16_melbourne'
if step_2:
IBMQ.load_account()
for provider in IBMQ.providers():
for potential_backend in provider.backends():
if potential_backend.name()==device_name:
backend = potential_backend
coupling_map = backend.configuration().coupling_map
###Output
_____no_output_____
###Markdown
When running a circuit on a real device, a transpilation process is first implemented. This changes the gates of the circuit into the native gate set implemented by the device. In some cases these changes are fairly trivial, such as expressing each Hadamard as a single qubit rotation by the corresponding Euler angles. However, the changes can be more major if the circuit does not respect the connectivity of the device. For example, suppose the circuit requires a controlled-NOT that is not directly implemented by the device. The effect must then be reproduced with techniques such as using additional controlled-NOT gates to move the qubit states around. As well as introducing additional noise, this also delocalizes any noise already present. A single qubit error in the original circuit could become a multiqubit monstrosity under the action of the additional transpilation. Such non-trivial transpilation must therefore be prevented when running quantum error correction circuits.Tests of the repetition code require qubits to be effectively ordered along a line. The only controlled-NOT gates required are between neighbours along that line. Our first job is therefore to study the coupling map of the device, and find a line.For Melbourne it is possible to find a line that covers all 15 qubits. The choice specified in the list `line` below is designed to avoid the most error prone `cx` gates. For the 53 qubit *Rochester* device, there is no single line that covers all 53 qubits. Instead we can use the following choice, which covers 43.
###Code
if device_name=='ibmq_16_melbourne':
line = [13,14,0,1,2,12,11,3,4,10,9,5,6,8,7]
elif device_name=='ibmq_rochester':
line = [10,11,17,23,22,21,20,19,16,7,8,9,5]#,0,1,2,3,4,6,13,14,15,18,27,26,25,29,36,37,38,41,50,49,48,47,46,45,44,43,42,39,30,31]
###Output
_____no_output_____
###Markdown
Now that we know how many qubits we have access to, we can create the repetition code objects for each code that we will run. Note that a code with `n` repetitions uses $n$ code qubits and $n-1$ link qubits, and so $2n-1$ in all. A line of $L$ qubits can therefore host a code with up to $n=(L+1)/2$ repetitions (rounded down), which is how `n_max` is set below.
###Code
n_min = 3
n_max = int((len(line)+1)/2)
code = {}
for n in range(n_min,n_max+1):
code[n] = RepetitionCode(n,1)
###Output
_____no_output_____
###Markdown
Before running the circuits from these codes, we need to ensure that the transpiler knows which physical qubits on the device it should use. This means using the qubit of `line[0]` to serve as the first code qubit, that of `line[1]` to be the first link qubit, and so on. This is done by the following function, which takes a repetition code object and a `line`, and creates a Python dictionary to specify which qubit of the code corresponds to which element of the line.
###Code
def get_initial_layout(code,line):
    initial_layout = {}
    n = len(code.code_qubit)  # number of code qubits in this particular code
    for j in range(n):
        initial_layout[code.code_qubit[j]] = line[2*j]
    for j in range(n-1):
        initial_layout[code.link_qubit[j]] = line[2*j+1]
    return initial_layout
###Output
_____no_output_____
###Markdown
Now we can transpile the circuits, to create the circuits that will actually be run by the device. A check is also made to ensure that the transpilation has not introduced non-trivial effects, by confirming that the number of `cx` gates has not increased. Furthermore, the compiled circuits are collected into a single list, to allow them all to be submitted at once in the same batch job.
###Code
if step_2:
circuits = []
for n in range(n_min,n_max+1):
initial_layout = get_initial_layout(code[n],line)
for log in ['0','1']:
circuits.append( transpile(code[n].circuit[log], backend=backend, initial_layout=initial_layout) )
num_cx = dict(circuits[-1].count_ops())['cx']
assert num_cx==2*(n-1), str(num_cx) + ' instead of ' + str(2*(n-1)) + ' cx gates for n = ' + str(n)
###Output
_____no_output_____
###Markdown
We are now ready to run the job. As with the simulated jobs considered already, the results from this are extracted into a dictionary `raw_results`. However, in this case it is extended to hold the results from different code sizes. This means that `raw_results[n]` in the following is equivalent to one of the `raw_results` dictionaries used earlier, for a given `n`.
###Code
if step_2:
job = execute(circuits,backend,shots=8192)
raw_results = {}
j = 0
for d in range(n_min,n_max+1):
raw_results[d] = {}
for log in ['0','1']:
raw_results[d][log] = job.result().get_counts(j)
j += 1
###Output
_____no_output_____
###Markdown
It can be convenient to save the data to file, so that the processing of step 3 can be done or repeated at a later time.
###Code
if step_2: # save results
with open('results/raw_results_'+device_name+'.txt', 'w') as file:
file.write(str(raw_results))
elif step_3: # read results
with open('results/raw_results_'+device_name+'.txt', 'r') as file:
raw_results = eval(file.read())
###Output
_____no_output_____
###Markdown
As we saw previously, the process of decoding first needs the results to be rewritten in order for the syndrome to be expressed in the correct form. As such, the `process_results` method of each repetition code object `code[n]` is used to determine a results dictionary `results[n]` from each `raw_results[n]`.
###Code
if step_3:
results = {}
for n in range(n_min,n_max+1):
results[n] = code[n].process_results( raw_results[n] )
###Output
_____no_output_____
###Markdown
The decoding also needs us to set up the `GraphDecoder` object for each code. The initialization of these involves the construction of the graph corresponding to the syndrome, as described in the last section.
###Code
if step_3:
dec = {}
for n in range(n_min,n_max+1):
dec[n] = GraphDecoder(code[n])
###Output
_____no_output_____
###Markdown
Finally, the decoder object can be used to process the results. Here the default algorithm, minimum weight perfect matching, is used. The end result is a calculation of the logical error probability. When running step 3, the following snippet also saves the logical error probabilities. Otherwise, it reads in previously saved probabilities.
###Code
if step_3:
logical_prob_match = {}
for n in range(n_min,n_max+1):
logical_prob_match[n] = dec[n].get_logical_prob(results[n])
with open('results/logical_prob_match_'+device_name+'.txt', 'w') as file:
file.write(str(logical_prob_match))
else:
with open('results/logical_prob_match_'+device_name+'.txt', 'r') as file:
logical_prob_match = eval(file.read())
###Output
_____no_output_____
###Markdown
The resulting logical error probabilities are displayed in the following graph, which uses a log scale on the y axis. We would expect that the logical error probability decays exponentially with increasing $n$. If this is the case, it is a confirmation that the device is compatible with this basic test of quantum error correction. If not, it implies that the qubits and gates are not sufficiently reliable.Fortunately, the results from IBM Q prototype devices typically do show the expected exponential decay. For the results below, we can see that small codes do represent an exception to this rule. Other deviations can also be expected, such as when increasing the size of the code means using a group of qubits with either exceptionally low or high noise.
###Code
import matplotlib.pyplot as plt
import numpy as np
x_axis = range(n_min,n_max+1)
P = { log: [logical_prob_match[n][log] for n in x_axis] for log in ['0', '1'] }
ax = plt.gca()
plt.xlabel('Code distance, n')
plt.ylabel('Logical error probability')
ax.scatter( x_axis, P['0'], label="logical 0")
ax.scatter( x_axis, P['1'], label="logical 1")
ax.set_yscale('log')
ax.set_ylim(ymax=1.5*max(P['0']+P['1']),ymin=0.75*min(P['0']+P['1']))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
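To quantify the decay, a quick fit of $\ln P$ against $n$ can be made (a sketch using the values plotted above; it assumes all probabilities are non-zero, and the numbers will vary with the device data).
###Code
# Sketch: fit ln(P) = slope*n + intercept to check for exponential decay.
for log in ['0','1']:
    slope, intercept = np.polyfit(list(x_axis), np.log(P[log]), 1)
    print('logical', log, ': P roughly proportional to exp(', round(slope,2), '* n )')
###Output
_____no_output_____
###Markdown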
Another insight we can gain is to use the results to determine how likely certain error processes are to occur.To do this we use the fact that each edge in the syndrome graph represents a particular form of error, occurring on a particular qubit at a particular point within the circuit. This is the unique single error that causes the syndrome values corresponding to both of the adjacent nodes to change. Using the results to estimate the probability of such a syndrome therefore allows us to estimate the probability of such an error event. Specifically, to first order it is clear that$$\frac{p}{1-p} \approx \frac{C_{11}}{C_{00}}$$Here $p$ is the probability of the error corresponding to a particular edge, $C_{11}$ is the number of counts in `results[n]['0']` corresponding to the syndrome value of both adjacent nodes being `1`, and $C_{00}$ is the same for them both being `0`.The decoder object has a method `weight_syndrome_graph` which determines these ratios, and assigns each edge the weight $-\ln(p/(1-p))$. By employing this method and inspecting the weights, we can easily retrieve these probabilities.
###Code
if step_3:
dec[n_max].weight_syndrome_graph(results=results[n_max])
probs = []
for edge in dec[n_max].S.edges:
ratio = np.exp(-dec[n_max].S.get_edge_data(edge[0],edge[1])['distance'])
probs.append( ratio/(1+ratio) )
with open('results/probs_'+device_name+'.txt', 'w') as file:
file.write(str(probs))
else:
with open('results/probs_'+device_name+'.txt', 'r') as file:
probs = eval(file.read())
###Output
_____no_output_____
###Markdown
Rather than display the full list, we can obtain a summary via the mean, standard deviation, minimum, maximum and quartiles.
###Code
import pandas as pd
pd.Series(probs).describe().to_dict()
###Output
_____no_output_____
###Markdown
The standard benchmarking of the devices does not produce any set of error probabilities that is exactly equivalent to the ones we have just extracted. However, the probabilities for readout errors and controlled-NOT gate errors could serve as a good comparison. Specifically, we can use the `backend` object to obtain these values from the benchmarking.
###Code
if step_3:
    gate_probs = []
    for j,qubit in enumerate(line):
        # readout error of this qubit
        gate_probs.append( backend.properties().readout_error(qubit) )
        # cx errors with its neighbours along the line
        if j>0:
            gate_probs.append( backend.properties().gate_error('cx',[qubit,line[j-1]]) )
        if j<len(line)-1:
            gate_probs.append( backend.properties().gate_error('cx',[qubit,line[j+1]]) )
with open('results/gate_probs_'+device_name+'.txt', 'w') as file:
file.write(str(gate_probs))
else:
with open('results/gate_probs_'+device_name+'.txt', 'r') as file:
gate_probs = eval(file.read())
pd.Series(gate_probs).describe().to_dict()
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
|
epa1361_open_G21/Week 5-6 - robustness and direct search/figs/figs.ipynb
|
###Markdown
todo::* arrows for direction
###Code
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import pareto
x = 0.1+0.9*np.random.rand(150, )
y = 1.5*(x-1)**2+0.2*np.random.rand(150, )
data = pd.DataFrame(np.asarray([x,y]).T, columns=['x','y'])
nondominated = pareto.eps_sort([list(data.itertuples(False))], epsilons=[0.05, 0.05], )
nondominated = np.asarray(nondominated)
sns.set_style('white')
fig, ax1 = plt.subplots(figsize=(8,8))
ax1.set_aspect('equal')
ax1.set_ylim(ymin=0)
ax1.set_xlim(xmin=0)
arrow_kwargs = {}
arrow_kwargs.setdefault('overhang', .3)
arrow_kwargs.setdefault('clip_on', False)
arrow_kwargs.update({'length_includes_head': True})
ax1.arrow(1, 0, -0.975, 0, fc='k', lw=1, head_width=.05, **arrow_kwargs)
ax1.arrow(0, 1, 0, -0.975, fc='k', lw=1, head_width=.05, **arrow_kwargs)
for i in np.arange(0, max(np.max(x), np.max(y)), 0.05):
ax1.axhline(i, ls='--', c='lightgrey', lw=1)
ax1.axvline(i, ls='--', c='lightgrey', lw=1)
ax1.scatter(x, y, s=15, color='lightgrey')
ax1.scatter(nondominated[:,0], nondominated[:, 1], s=20)
ax1.set_xlabel("$x$")
ax1.set_ylabel("$y$")
for entry in nondominated:
rectangle = mpl.patches.Rectangle(entry, 1-entry[0], 1-entry[1], ec=(1.0, 0.4980392156862745, 0.054901960784313725),
fc=(1.0, 0.4980392156862745, 0.054901960784313725), zorder=0)
ax1.add_patch(rectangle)
save_fig(fig, '.', 'hypervolume')  # save_fig: helper assumed to be defined elsewhere in this repository
sns.despine()
plt.show()
sns.color_palette()[1]
###Output
_____no_output_____
|
notebooks/stationary1559.ipynb
|
###Markdown
TL;DR- EIP 1559 is a proposed improvement for the transaction fee market. It sets a variable "base" gasprice to be paid by the user and burned by the protocol, in addition to a "tip" paid by the user to the block producer.- The base price ("basefee") adjusts upwards when demand is high, and downwards otherwise.- We observe in this notebook that in a stationary environment, basefee converges to a value that prices out enough users to achieve the target block size.---We introduce here the building blocks of agent-based simulations of EIP1559. This follows an [earlier notebook](https://nbviewer.jupyter.org/github/ethereum/rig/blob/master/eip1559/eip1559.ipynb) that merely looked at the dynamics of the EIP 1559 mechanism. In the present notebook, agents decide on transactions based on the current basefee and form their transactions based on internal evaluations of their values and costs.[Huberman et al., 2019](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3025604) introduced such a model and framework for the Bitcoin payment system. We adapt it here to study the dynamics of the basefee.All the code is available in [this repo](https://github.com/barnabemonnot/abm1559), with some preliminary documentation [here](https://barnabemonnot.com/abm1559/build/html/). You can also download the [`abm1559` package from PyPi](https://pypi.org/project/abm1559/) and reproduce all the analysis here yourself! The broad linesWe have several entities. _Users_ come in randomly (following a Poisson process) and create and send transactions. The transactions are received by a _transaction pool_, from which the $x$ best _valid_ transactions are included in a _block_ created at fixed intervals. $x$ depends on how many valid transactions exist in the pool (e.g., how many post a gasprice exceeding the prevailing basefee in 1559 paradigm) and the block gas limit. Once transactions are included in the block, and the block is included in the _chain_, transactions are removed from the transaction pool.How do users set their parameters? Users have their own internal ways of evaluating their _costs_. Users obtain a certain _value_ from having their transaction included, which we call $v$. $v$ is different for every user. This value is fixed but their overall _payoff_ decreases the longer they wait to be included. Some users have higher time preferences than others, and their payoff decreases faster than others the longer they wait. Put together, we have the following:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$Users expect to wait for a certain amount of time. In this essay, we set this to a fixed value -- somewhat arbitrarily we choose 5. This can be readily understood in the following way. Users estimate what their payoff will be from getting included 5 blocks from now, assuming basefee remains constant. If this payoff is negative, they decide not to send the transaction to the pool (in queuing terminology, they _balk_). We'll play with this assumption later.The scenario is set up this way to study _stationarity_: assuming some demand comes in from a fixed distribution at regular intervals, we must expect basefee to reach some stationary value and stay there. It is then reasonable for users, at this stationary point, to consider that 5 blocks from now basefee will still be at the same level. 
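As a concrete illustration of this decision rule, here is a minimal sketch in plain Python (not the `abm1559` classes themselves): a user sends their transaction only if the payoff estimated at the current basefee, after the expected wait, is positive. The 1 Gwei premium used below matches the choice discussed further down.
###Code
# Sketch of the balking rule described above (all quantities in Gwei per unit of gas).
def decides_to_send(value, cost_per_block, basefee, premium=1.0, expected_wait=5):
    expected_gas_price = basefee + premium
    payoff = value - expected_wait * cost_per_block - expected_gas_price
    return payoff > 0

print(decides_to_send(value=15, cost_per_block=0.5, basefee=10))  # True: 15 - 2.5 - 11 > 0
print(decides_to_send(value=12, cost_per_block=0.8, basefee=10))  # False: this user balks
###Output
_____no_output_____
###Markdown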
In the nonstationary case, when for instance a systemic change in the demand happens (e.g., the rate of Poisson arrivals increases), a user may want to hedge their bets by estimating their future payoffs in a different way, taking into account that basefee might increase instead. This strategy would probably be a good idea during the _transition_ phase, when basefee shifts from one stationary point to a new one.We make the assumption here that users choose their 1559 parameters based on their value alone. We set the transaction `max_fee` parameter to the value of the user and set the `gas_premium` parameter to a residual value -- 1 Gwei per unit of gas.There is no loss of generality in assuming all users send the same transaction in (e.g., a simple transfer) and so all transactions have the same `gas_used` value (21,000). In 1559 paradigm, with a 20M gas limit per block, this allows at most 952 transactions to be included, although the mechanism will target half of that, around 475 here. The protocol adjusts the basefee to apply economic pressure, towards a target gas usage of 10M per block. SimulationWe import a few classes from our `abm1559` package.
###Code
%config InlineBackend.figure_format = 'svg'
import os, sys
sys.path.insert(1, os.path.realpath(os.path.pardir))
# You may remove the two lines above if you have installed abm1559 from pypi
from abm1559.utils import constants
from abm1559.txpool import TxPool
from abm1559.users import User1559
from abm1559.userpool import UserPool
from abm1559.chain import (
Chain,
Block1559,
)
from abm1559.simulator import (
spawn_poisson_demand,
update_basefee,
)
import pandas as pd
###Output
_____no_output_____
###Markdown
And define the main function used to simulate the fee market.
###Code
def simulate(demand_scenario, UserClass):
# Instantiate a couple of things
txpool = TxPool()
basefee = constants["INITIAL_BASEFEE"]
chain = Chain()
metrics = []
user_pool = UserPool()
for t in range(len(demand_scenario)):
if t % 100 == 0: print(t)
# `env` is the "environment" of the simulation
env = {
"basefee": basefee,
"current_block": t,
}
# We return a demand drawn from a Poisson distribution.
# The parameter is given by `demand_scenario[t]`, and can vary
# over time.
users = spawn_poisson_demand(t, demand_scenario[t], UserClass)
# We query each new user with the current basefee value
# Users either return a transaction or None if they prefer to balk
decided_txs = user_pool.decide_transactions(users, env)
# New transactions are added to the transaction pool
txpool.add_txs(decided_txs)
# The best valid transactions are taken out of the pool for inclusion
selected_txs = txpool.select_transactions(env)
txpool.remove_txs([tx.tx_hash for tx in selected_txs])
# We create a block with these transactions
block = Block1559(txs = selected_txs, parent_hash = chain.current_head, height = t, basefee = basefee)
# The block is added to the chain
chain.add_block(block)
# A couple of metrics we will use to monitor the simulation
row_metrics = {
"block": t,
"basefee": basefee / (10 ** 9),
"users": len(users),
"decided_txs": len(decided_txs),
"included_txs": len(selected_txs),
"blk_avg_gas_price": block.average_gas_price(),
"blk_avg_tip": block.average_tip(),
"pool_length": txpool.pool_length(),
}
metrics.append(row_metrics)
# Finally, basefee is updated and a new round starts
basefee = update_basefee(block, basefee)
return (pd.DataFrame(metrics), user_pool, chain)
###Output
_____no_output_____
###Markdown
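The last step of each round, `update_basefee`, follows the EIP 1559 adjustment rule: basefee moves up or down in proportion to how far the block's gas usage is from the target. The sketch below reproduces the standard formula for intuition only; the exact implementation lives in `abm1559.simulator`.
###Code
# Sketch of the standard EIP 1559 basefee update (illustration only).
def update_basefee_sketch(gas_used, basefee, gas_target=10_000_000, max_change_denominator=8):
    delta = gas_used - gas_target
    return basefee * (1 + delta / gas_target / max_change_denominator)

# A full 20M gas block pushes basefee up by 12.5%, an empty block pulls it down by 12.5%,
# and a block right on target leaves it unchanged.
print(update_basefee_sketch(20_000_000, 1.0), update_basefee_sketch(0, 1.0), update_basefee_sketch(10_000_000, 1.0))
###Output
_____no_output_____
###Markdown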
As you can see, `simulate` takes in a `demand_scenario` array. Earlier we mentioned that each round, we draw the number of users wishing to send transactions from a Poisson distribution. [This distribution is parameterised by the expected number of arrivals, called _lambda_ $\lambda$](https://en.wikipedia.org/wiki/Poisson_distribution). The `demand_scenario` array contains a sequence of such lambdas. We also provide in `UserClass` the type of user we would like to model (see the [docs](http://barnabemonnot.com/abm1559/build/html/users) for more details).Our users draw their _value_ for the transaction (per unit of gas) from a uniform distribution, picking a random number between 0 and 20 (Gwei). Their cost for waiting one extra unit of time is drawn from a uniform distribution too, this time between 0 and 1 (Gwei). The closer their cost is to 1, the more impatient users are.Say for instance that I value each unit of gas at 15 Gwei, and my cost per round is 0.5 Gwei. If I wait for 6 blocks to be included at a gas price of 10 Gwei, my payoff is $15 - 6 \times 0.5 - 10 = 2$.The numbers above sound arbitrary, and in a sense they are! They were chosen to respect the scales we are used to ([although gas prices are closer to 100 Gweis these days...](https://ethereum.github.io/rig/ethdata/notebooks/gas_weather_reports/exploreJuly21.html)). It also turns out that any distribution (uniform, Pareto, whatever floats your boat) leads to stationarity. The important part is that _some_ users have positive value for transacting in the first place, enough to fill a block to its target size at least. The choice of sampling the cost from a uniform distribution, as opposed to having all users experience the same cost per round, allows for **simulating a scenario where some users are more in a hurry than others**.
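To make these distributional assumptions concrete, here is a small sketch (plain `numpy`, not the actual `User1559` sampling code): it draws one round's worth of values and costs and checks what fraction of users would have a positive payoff at a given gas price and a five-block wait.
###Code
# Sketch of the demand assumptions above (illustration only).
import numpy as np
rng = np.random.default_rng(42)
values = rng.uniform(0, 20, size=2000)  # value per unit of gas, in Gwei
costs = rng.uniform(0, 1, size=2000)    # cost of waiting per block, in Gwei
payoffs = values - 5 * costs - 10       # payoff if included after 5 blocks at 10 Gwei
print((payoffs > 0).mean())             # fraction of users who would not balk at this price
###Output
_____no_output_____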
###Code
demand_scenario = [2000 for i in range(200)]
(df, user_pool, chain) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
To study the stationary case, we create an array repeating $\lambda$ for as many blocks as we wish to simulate the market for. We set $\lambda$ to spawn on average 2000 users between two blocks. ResultsLet's print the head and tail of the data frame holding our metrics. Each row corresponds to one round of our simulation, so one block.
###Code
df
###Output
_____no_output_____
###Markdown
At the start of the simulation we clearly see in column `users` a demand close to 2000 users per round. Among these 2000 or so, around 1500 decide to send their transaction in (`decided_txs`). The 500 who don't might have a low value or high per-round costs, meaning it is unprofitable for them to even send their transaction in. Eventually 952 of them are included (`included_txs`), maxing out the block gas limit. The basefee starts at 1 Gwei but steadily increases from there, reaching around 11.8 Gwei by the end.By the end of the simulation, we note that `decided_txs` is always equal to `included_txs`. By this point, the basefee has risen enough to make it unprofitable for most users to send their transactions. This is exactly what we want! Users balk at the current prices.In the next chart we show the evolution of basefee and tips. We define _tip_ as the gas price minus the basefee, which is what _miners_ receive from the transaction.Note that [tip is in general **not** equal to the gas premium](https://twitter.com/barnabemonnot/status/1284271520311848960) that users set. This is particularly true when basefee plus gas premium exceeds the max fee of the user. In the graph below, the tip hovers around 1 Gwei (the premium), but is sometimes less than 1 too, especially when users see the prevailing basefee approach their posted max fees.
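Concretely, the gas price and tip follow from the transaction parameters as in the sketch below (illustration only; the transaction and block classes of `abm1559` compute this internally).
###Code
# Sketch: effective gas price and tip under 1559, in Gwei per unit of gas.
def effective_tip(basefee, max_fee, gas_premium):
    gas_price = min(max_fee, basefee + gas_premium)
    return gas_price - basefee

print(effective_tip(basefee=10, max_fee=15, gas_premium=1))    # 1.0: the premium fits under the max fee
print(effective_tip(basefee=11.5, max_fee=12, gas_premium=1))  # 0.5: capped by the max fee
###Output
_____no_output_____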
###Code
df.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
Notice the increase at the beginning followed by a short drop? At the very beginning, the pool fills up quickly with many users hopeful to get their transactions in with a positive resulting payoff. The basefee increases until users start balking **and** the pool is exhausted. Once exhausted, basefee starts decreasing again to settle at the stationary point where the pool only includes transactions that are invalid given the stationary basefee.We can see the pool length becoming stationary in the next plot, showing the length of the pool over time.
###Code
df.plot("block", "pool_length")
###Output
_____no_output_____
###Markdown
The remaining transactions are likely from early users who did not balk even though basefee was increasing, and who were quickly outbid by others. Demand shockWe look at a stationary setting, where the new demand coming in each new round follows a fixed expected rate of arrival. Demand shocks may be of two kinds:- Same number of users, different values for transactions and costs for waiting.- Increased number of users, same values and costs.We'll consider the second scenario here, simply running the simulation again and increasing the $\lambda$ parameter of our Poisson arrival process suddenly, from expecting 2000, to expecting 6000 users per round.
###Code
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_jump, user_pool_jump, chain_jump) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
The next plot shows the number of new users each round. We note at block 100 a sudden jump from around 2000 new users to 6000.
###Code
df_jump.plot("block", "users")
df_jump.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
We see a jump around block 100, when the arrival rate of users switches from 2000 to 6000. The basefee increases in response. With a block limit of 20M gas, about 950 transactions fit into each block. Targeting half of this value, the basefee increases until more or less 475 transactions are included in each block.Since our users' values and costs are always drawn from the same distribution, when 2000 users show up, we expect to let in about 25% of them (~ 475 / 2000), the 25% with greatest expected payoff. When 6000 users come in, we now only expect the "richest" 8% (~ 475 / 6000) to get in, so we "raise the bar" for the basefee, since we need to discriminate more.
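A back-of-the-envelope calculation makes this "raising the bar" effect concrete. Ignoring waiting costs and the premium (so the numbers are only indicative), with values uniform on [0, 20] Gwei, the threshold that admits only the roughly 475 highest-value users out of N sits at the (1 - 475/N) quantile.
###Code
# Back-of-the-envelope sketch: value threshold needed to admit ~475 users out of N,
# assuming values are uniform on [0, 20] Gwei and ignoring premiums and waiting costs.
for N in [2000, 6000]:
    share = 475 / N
    threshold = 20 * (1 - share)
    print(N, 'users -> admit top', round(100 * share, 1), '% -> value threshold ~', round(threshold, 1), 'Gwei')
###Output
_____no_output_____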
###Code
df_jump.plot("block", ["pool_length", "users", "decided_txs", "included_txs"])
###Output
_____no_output_____
###Markdown
As we see with the graph above, for a short while after block 100, blocks include more than the usual ~475 transactions. This is the transition between the old and the new stationary points.Since we have a lot more new users each round, more of them are willing and able to pay for their transactions above the current basefee, and so get included. This keeps happening until the basefee reaches a new stationary level. Changing expected timeUp until now, users decided whether to join the transaction pool or not based on the expectation that they would be included at least 5 blocks after they join. They evaluated their payoff assuming that basefee did not change (due to stationarity) for these 5 blocks. If their value for transacting minus the cost of waiting for 5 blocks minus the cost of transacting is positive, they sent their transactions in!$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting 5 blocks} - \texttt{transaction fee} > 0 $$Under a stationary demand, however, users can expect to be included in the next block. So let's have users expect to be included in the next block, right after their appearance, and see what happens. We do this by subclassing our `User1559` agent and overriding its `expected_time` method.
###Code
class OptimisticUser(User1559):
def expected_time(self, env):
return 0
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_opti, user_pool_opti, chain_opti) = simulate(demand_scenario, OptimisticUser)
df_opti.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
The plot looks the same as before. But let's look at the average basefee for the last 50 blocks in this scenario and in the previous one.
###Code
df_opti[(df_opti.block > 150)][["basefee"]].mean()
df_jump[(df_jump.block > 150)][["basefee"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included in the next block rather than wait for at least 5, the basefee increases! This makes sense if we come back to our payoff definition:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$The estimated cost for waiting is lower now since users estimate they'll be included in the next block and not wait 5 blocks to get in. Previously, some users with high values but high time preferences might have been discouraged from joining the pool. Now these users don't expect to wait as much, and since their values are high, they don't mind bidding for a higher basefee either. We can indeed check that on average, users included in this last scenario have higher values than users included in the previous one.To do so, we export to pandas `DataFrame`s the user pool (to obtain their values and costs) and the chain (to obtain the addresses of included users in the last 50 blocks).
###Code
user_pool_opti_df = user_pool_opti.export().rename(columns={ "pub_key": "sender" })
chain_opti_df = chain_opti.export()
###Output
_____no_output_____
###Markdown
Let's open these up and have a look at the data. `user_pool_opti_df` registers all users we spawned in our simulation.
###Code
user_pool_opti_df.tail()
###Output
_____no_output_____
###Markdown
Meanwhile, `chain_opti_df` lists all the transactions included in the chain.
###Code
chain_opti_df.tail()
###Output
_____no_output_____
###Markdown
With a simple join on the `sender` column we can associate each user with their included transaction. We look at the average value of included users after the second stationary point.
###Code
chain_opti_df[(chain_opti_df.block_height >= 150)].join(
user_pool_opti_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included at least one block after they send their transaction, the average value of included users is around 19.2 Gwei.
###Code
user_pool_jump_df = user_pool_jump.export().rename(columns={ "pub_key": "sender" })
chain_jump_df = chain_jump.export()
chain_jump_df[(chain_jump_df.block_height >= 150)].join(
user_pool_jump_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
But when users expect to be included at least _five_ blocks after, the average value of included users is around 18.7 Gwei, confirming that when users expect next block inclusion, higher value users get in and raise the basefee in the process. ConclusionWe've looked at 1559 when users with their own values and costs decide whether to join the pool or not based on the current basefee level. These users estimate their ultimate payoff by assuming _stationarity_: the demand between rounds follows the same arrival process and the same distribution of values and costs. In this stationary environment, basefee settles on some value and mostly stays there, allowing users to estimate their payoff should they wait for five or one blocks to be included.We've again left aside some important questions. Here all users simply leave a 1 Gwei premium in their transactions. In reality, we should expect users to attempt to "game" the system by leaving higher tips to get in first. We can suppose that in a stationary environment, "gaming" is only possible until basefee reaches its stationary point (during the transition period) and exhausts the feasible demand. We will leave this question for another notebook.(Temporary) non-stationarity is more interesting. The [5% meme](https://insights.deribit.com/market-research/analysis-of-eip-2593-escalator/) during which sudden demand shocks precipitate a large influx of new, high-valued transactions should also see users try to outcompete each other based on premiums alone, until basefee catches up. The question of whether 1559 offers anything in this case or whether the whole situation would look like a first price auction may be better settled empirically, but we can intuit that 1559 would smooth the process slightly by [offering a (laggy) price oracle](https://twitter.com/onurhsolmaz/status/1286068365812011009).And then we have the question of miner collusion, which rightfully agitates a lot of the ongoing conversation. In the simulations we do here, we instantiated one transaction pool only, which should tell you that we are looking at a "centralised", honest miner that includes transactions as much as possible, and not a collection or a cartel of miners cooperating. We can of course weaken this assumption and have several mining pools with their own behaviours and payoff evaluations, much like we modelled our users. We still would like to have a good theoretical understanding of the risks and applicability of miner collusion strategies. Onward!--- (Bonus) Ex post individual rationality_Individual rationality_ is the idea that agents won't join a mechanism unless they hope to make some positive payoff out of it. I'd rather not transact if my value for transacting minus my costs is negative.In general, we like this property and we want to make the mechanism individually rational to as many agents as possible. Yet, some mechanisms fail to satisfy _ex post_ individual rationality: I might _expect_ to make a positive payoff from the mechanism, but some _realisation_ of the mechanism exists where my payoff is negative.Take an auction. As long as my bid is lower or equal to my value for the auctioned item, the mechanism is ex post individually rational for me: I can never "overpay". If I value the item for 10 ETH and decide to bid 11 ETH, in a first-price auction where I pay for my bid if I have the highest, there is a realisation of the mechanism where I am the winner and I am asked to pay 11 ETH. 
My payoff is -1 ETH then.In the transaction fee market, ex post individual rationality is not guaranteed unless I can cancel my transaction. In the simulations here, we do not offer this option to our agents. They expect to wait for inclusion for a certain amount of blocks, and evaluate whether their payoff after that wait is positive or not to decide whether to send their transaction or not. However, some agents might wait longer than their initial estimation, in particular before the mechanism reaches stationarity. Some realisations of the mechanism then yield a negative payoff for these agents, and the mechanism is not ex post individually rational.Let's look at the agents' payoff using the transcript of transactions included in the chain. For each transaction, we want to find out what was the ultimate payoff for the agent who sent it in. If the transaction was included much later than the agent's initial estimation, this payoff is negative, and the mechanism wasn't ex post individually rational to them.
###Code
user_pool_df = user_pool.export().rename(columns={ "pub_key": "sender" })
chain_df = chain.export()
user_txs_df = chain_df.join(user_pool_df.set_index("sender"), on="sender")
###Output
_____no_output_____
###Markdown
In the next chunk we obtain the users' payoffs: their value minus the costs incurred from the transaction fee and the time they waited.
###Code
user_txs_df["payoff"] = user_txs_df.apply(
lambda row: row.user.payoff({
"current_block": row.block_height,
"gas_price": row.tx.gas_price({
"basefee": row.basefee * (10 ** 9) # we need basefee in wei
})
    }) / (10 ** 9), # convert the payoff from wei to Gwei
axis = 1
)
user_txs_df["epir"] = user_txs_df.payoff.apply(
lambda payoff: payoff >= 0
)
###Output
_____no_output_____
###Markdown
Now we count the fraction of users in each block who received a positive payoff.
###Code
epir_df = pd.concat([
user_txs_df[["block_height", "tx_hash"]].groupby(["block_height"]).agg(["count"]),
user_txs_df[["block_height", "epir"]][user_txs_df.epir == True].groupby(["block_height"]).agg(["count"])
], axis = 1)
epir_df["percent_epir"] = epir_df.apply(
lambda row: row.epir / row.tx_hash * 100,
axis = 1
)
###Output
_____no_output_____
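###Markdown
As an aside, since `epir` is a boolean column, the same per-block fraction can be computed more directly with a single `groupby` — a sketch equivalent to the cell above:
###Code
# The mean of a boolean column is the share of True values, here per block
epir_share = (
    user_txs_df
    .groupby("block_height")["epir"]
    .mean()
    .mul(100)
    .rename("percent_epir")
    .reset_index()
)
epir_share.tail()
###Output
_____no_output_____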
###Markdown
Let's plot it!
###Code
epir_df.reset_index().plot("block_height", ["percent_epir"])
###Output
_____no_output_____
###Markdown
At the very beginning, all users (100%) have positive payoff. They have only waited for 1 block to get included. This percentage steadily drops, as basefee increases: some high value users waiting in the pool get included much later than they expected, netting a negative payoff.Once we pass the initial instability (while basefee is looking for its stationary value), all users receive a positive payoff. This is somewhat expected: once basefee has increased enough to weed out excess demand, users are pretty much guaranteed to be included in the next block, and so the realised waiting time will always be less than their estimate. ---_Check out also:_ A recent [ethresear.ch post](https://ethresear.ch/t/a-mechanism-for-daily-autonomous-gas-price-stabilization/7762) by [Onur Solmaz](https://twitter.com/onurhsolmaz), on a 1559-inspired mechanism for daily gas price stabilization, with simulations.
###Markdown
Many thanks to Sacha for his comments, edits and corrections (all errors remain mine); Dan Finlay for prompting a live discussion of this notebook in a recent call.
###Markdown
Stationary behaviour of EIP 1559 agent-based model July 2020, [@barnabemonnot](https://twitter.com/barnabemonnot) [Robust Incentives Group](https://github.com/ethereum/rig), Ethereum Foundation---We introduce here the building blocks of agent-based simulations of EIP1559. This follows an [earlier notebook](https://nbviewer.jupyter.org/github/ethereum/rig/blob/master/eip1559/eip1559.ipynb) that merely looked at the dynamics of the EIP 1559 mechanism. In the present notebook, agents decide on transactions based on the current basefee and form their transactions based on internal evaluations of their values and costs.[Huberman et al., 2019](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3025604) introduced such a model and framework for the Bitcoin payment system. We adapt it here to study the dynamics of the basefee.All the code is available in [this repo](https://github.com/barnabemonnot/abm1559), with some preliminary documentation [here](https://barnabemonnot.com/abm1559/build/html/). You can also download the [`abm1559` package from PyPi](https://pypi.org/project/abm1559/) and reproduce all the analysis here yourself! The broad linesWe have several entities. _Users_ come in randomly (following a Poisson process) and create and send transactions. The transactions are received by a _transaction pool_, from which the $x$ best _valid_ transactions are included in a _block_ created at fixed intervals. $x$ depends on how many valid transactions exist in the pool (e.g., how many post a gasprice exceeding the prevailing basefee in 1559 paradigm) and the block gas limit. Once transactions are included in the block, and the block is included in the _chain_, transactions are removed from the transaction pool.How do users set their parameters? Users have their own internal ways of evaluating their _costs_. Users obtain a certain _value_ from having their transaction included, which we call $v$. $v$ is different for every user. This value is fixed but their overall _payoff_ decreases the longer they wait to be included. Some users have higher time preferences than others, and their payoff decreases faster than others the longer they wait. Put together, we have the following:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$Users expect to wait for a certain amount of time. In this essay, we set this to a fixed value -- somewhat arbitrarily we choose 5. This can be readily understood in the following way. Users estimate what their payoff will be from getting included 5 blocks from now, assuming basefee remains constant. If this payoff is negative, they decide not to send the transaction to the pool (in queuing terminology, they _balk_). We'll play with this assumption later.The scenario is set up this way to study _stationarity_: assuming some demand comes in from a fixed distribution at regular intervals, we must expect basefee to reach some stationary value and stay there. It is then reasonable for users, at this stationary point, to consider that 5 blocks from now basefee will still be at the same level. In the nonstationary case, when for instance a systemic change in the demand happens (e.g., the rate of Poisson arrivals increases), a user may want to hedge their bets by estimating their future payoffs in a different way, taking into account that basefee might increase instead. 
This strategy would probably be a good idea during the _transition_ phase, when basefee shifts from one stationary point to a new one.We make the assumption here that users choose their 1559 parameters based on their value alone. We set the transaction `max_fee` parameter to the value of the user and set the `gas_premium` parameter to a residual value -- 1 Gwei per unit of gas.There is no loss of generality in assuming all users send the same transaction in (e.g., a simple transfer) and so all transactions have the same `gas_used` value (21,000). In 1559 paradigm, with a 20M gas limit per block, this allows at most 952 transactions to be included, although the mechanism will target half of that, around 475 here. The protocol adjusts the basefee to apply economic pressure, towards a target gas usage of 10M per block. SimulationWe import a few classes from our `abm1559` package.
###Code
import os, sys
sys.path.insert(1, os.path.realpath(os.path.pardir))
# You may remove the two lines above if you have installed abm1559 from pypi
from abm1559.utils import constants
from abm1559.txpool import TxPool
from abm1559.users import User1559
from abm1559.userpool import UserPool
from abm1559.chain import (
Chain,
Block1559,
)
from abm1559.simulator import (
spawn_poisson_demand,
update_basefee,
)
import pandas as pd
###Output
_____no_output_____
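###Markdown
Before defining the simulation loop, here is the user decision rule in miniature. This is a standalone sketch of the behaviour described above (values in Gwei per unit of gas, `max_fee` equal to the user's value, a 1 Gwei premium, a 5-block expected wait); the actual `User1559` implementation lives in the package and may differ in its details.
###Code
def expected_payoff(value, cost_per_block, basefee, premium=1, expected_wait=5):
    # The user posts max_fee = value, so they expect to pay at most
    # min(value, basefee + premium) per unit of gas if included.
    expected_gas_price = min(value, basefee + premium)
    return value - expected_wait * cost_per_block - expected_gas_price

def sends_transaction(value, cost_per_block, basefee):
    # Balk (stay out of the pool) whenever the expected payoff is negative
    return expected_payoff(value, cost_per_block, basefee) >= 0

sends_transaction(value=15, cost_per_block=0.5, basefee=10)  # 15 - 2.5 - 11 = 1.5, so True
###Output
_____no_output_____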
###Markdown
And define the main function used to simulate the fee market.
###Code
def simulate(demand_scenario, UserClass):
# Instantiate a couple of things
txpool = TxPool()
basefee = constants["INITIAL_BASEFEE"]
chain = Chain()
metrics = []
user_pool = UserPool()
for t in range(len(demand_scenario)):
if t % 100 == 0: print(t)
# `params` are the "environment" of the simulation
params = {
"basefee": basefee,
"current_block": t,
}
# We return a demand drawn from a Poisson distribution.
# The parameter is given by `demand_scenario[t]`, and can vary
# over time.
users = spawn_poisson_demand(t, demand_scenario[t], UserClass)
# We query each new user with the current basefee value
# Users either return a transaction or None if they prefer to balk
decided_txs = user_pool.decide_transactions(users, params)
# New transactions are added to the transaction pool
txpool.add_txs(decided_txs)
# The best valid transactions are taken out of the pool for inclusion
selected_txs = txpool.select_transactions(params)
txpool.remove_txs([tx.tx_hash for tx in selected_txs])
# We create a block with these transactions
block = Block1559(txs = selected_txs, parent_hash = chain.current_head, height = t, basefee = basefee)
# The block is added to the chain
chain.add_block(block)
# A couple of metrics we will use to monitor the simulation
row_metrics = {
"block": t,
"basefee": basefee / (10 ** 9),
"users": len(users),
"decided_txs": len(decided_txs),
"included_txs": len(selected_txs),
"blk_avg_gas_price": block.average_gas_price(),
"blk_avg_tip": block.average_tip(),
"pool_length": txpool.pool_length,
}
metrics.append(row_metrics)
# Finally, basefee is updated and a new round starts
basefee = update_basefee(block, basefee)
return (pd.DataFrame(metrics), user_pool, chain)
###Output
_____no_output_____
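###Markdown
For reference, `update_basefee` applies the EIP 1559 adjustment at the end of each round: basefee moves proportionally to the gap between the block's gas usage and the 10M target, by at most 12.5% per block. The helper below is a standalone sketch of that rule (ignoring the integer arithmetic of the actual spec), not the package code itself.
###Code
def update_basefee_sketch(gas_used, basefee, target=10_000_000, max_change=1/8):
    # A block exactly at target leaves basefee unchanged; a full block
    # (2x target) raises it by 12.5%; an empty block lowers it by 12.5%.
    return basefee + basefee * max_change * (gas_used - target) / target

update_basefee_sketch(gas_used=20_000_000, basefee=10 * 10**9)  # full block: 11.25 Gwei, in wei
###Output
_____no_output_____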
###Markdown
As you can see, `simulate` takes in a `demand_scenario` array. Earlier we mentioned that each round, we draw the number of users wishing to send transactions from a Poisson distribution. [This distribution is parameterised by the expected number of arrivals, called _lambda_ $\lambda$](https://en.wikipedia.org/wiki/Poisson_distribution). The `demand_scenario` array contains a sequence of such lambdas. We also provide in `UserClass` the type of user we would like to model (see the [docs](http://barnabemonnot.com/abm1559/build/html/users) for more details). Our users draw their _value_ for the transaction (per unit of gas) from a uniform distribution, picking a random number between 0 and 20 (Gwei). Their cost for waiting one extra unit of time is drawn from a uniform distribution too, this time between 0 and 1 (Gwei). The closer their cost is to 1, the more impatient users are. Say for instance that I value each unit of gas at 15 Gwei, and my cost per round is 0.5 Gwei. If I wait for 6 blocks to be included at a gas price of 10 Gwei, my payoff is $15 - 6 \times 0.5 - 10 = 2$. The numbers above sound arbitrary, and in a sense they are! They were chosen to respect the scales we are used to ([although gas prices are closer to 100 Gweis these days...](https://ethereum.github.io/rig/ethdata/notebooks/gas_weather_reports/exploreJuly21.html)). It also turns out that any distribution (uniform, Pareto, whatever floats your boat) leads to stationarity. The important part is that _some_ users have positive value for transacting in the first place, enough to fill a block to its target size at least. The choice to sample the cost from a uniform distribution, as opposed to having all users experience the same cost per round, allows for **simulating a scenario where some users are more in a hurry than others**.
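To make the arrival process concrete, here is what one round of demand looks like under these assumptions — a standalone numpy sketch, independent of the package:
###Code
import numpy as np

rng = np.random.default_rng(42)

def draw_round_demand(expected_users=2000):
    # Number of new users this round ~ Poisson(lambda)
    n = rng.poisson(expected_users)
    # Each user draws a value in [0, 20] Gwei and a per-block waiting cost in [0, 1] Gwei
    values = rng.uniform(0, 20, n)
    costs = rng.uniform(0, 1, n)
    return values, costs

values, costs = draw_round_demand()
len(values), values.mean(), costs.mean()
###Output
_____no_output_____
###Markdown
Now let's run the full simulation for 200 blocks.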
###Code
demand_scenario = [2000 for i in range(200)]
(df, user_pool, chain) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
To study the stationary case, we create an array repeating $\lambda$ for as many blocks as we wish to simulate the market for. We set $\lambda$ to spawn on average 2000 users between two blocks. ResultsLet's print the head and tail of the data frame holding our metrics. Each row corresponds to one round of our simulation, so one block.
###Code
df
###Output
_____no_output_____
###Markdown
At the start of the simulation we clearly see in column `users` a demand close to 2000 users per round. Among these 2000 or so, around 1500 decide to send their transaction in (`decided_txs`). The 500 who don't might have a low value or high per-round costs, meaning it is unprofitable for them to even send their transaction in. Eventually 952 of them are included (`included_txs`), maxing out the block gas limit. The basefee starts at 1 Gwei but steadily increases from there, reaching around 11.8 Gwei by the end.By the end of the simulation, we note that `decided_txs` is always equal to `included_txs`. By this point, the basefee has risen enough to make it unprofitable for most users to send their transactions. This is exactly what we want! Users balk at the current prices.In the next chart we show the evolution of basefee and tips. We define _tip_ as the gas price minus the basefee, which is what _miners_ receive from the transaction.Note that [tip is in general **not** equal to the gas premium](https://twitter.com/barnabemonnot/status/1284271520311848960) that users set. This is particularly true when basefee plus gas premium exceeds the max fee of the user. In the graph below, the tip hovers around 1 Gwei (the premium), but is sometimes less than 1 too, especially when users see the prevailing basefee approach their posted max fees.
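Concretely, under 1559 the gas price of an included transaction is capped by the user's max fee, which is exactly what makes the tip dip below the 1 Gwei premium. A minimal sketch of that rule (the package exposes this computation through the transaction's `gas_price` method; treat the helper below as an illustration):
###Code
def effective_gas_price(basefee, max_fee, premium):
    # The user never pays more than their posted max fee per unit of gas
    return min(max_fee, basefee + premium)

def tip(basefee, max_fee, premium):
    # What the miner receives on top of the burned basefee
    return effective_gas_price(basefee, max_fee, premium) - basefee

# A user valuing gas at 12 Gwei: full 1 Gwei tip at basefee 10,
# but only 0.5 Gwei once basefee reaches 11.5
tip(basefee=10, max_fee=12, premium=1), tip(basefee=11.5, max_fee=12, premium=1)
###Output
_____no_output_____
###Markdown
Here is that graph: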
###Code
df.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
Notice the increase at the beginning followed by a short drop? At the very beginning, the pool fills up quickly with many users hopeful to get their transactions in with a positive resulting payoff. The basefee increases until users start balking **and** the pool is exhausted. Once exhausted, basefee starts decreasing again to settle at the stationary point where the pool only includes transactions that are invalid given the stationary basefee.We can see the pool length becoming stationary in the next plot, showing the length of the pool over time.
###Code
df.plot("block", "pool_length")
###Output
_____no_output_____
###Markdown
The remaining transactions are likely from early users who did not balk even though basefee was increasing, and who were quickly outbid by others. Demand shockWe look at a stationary setting, where the new demand coming in each new round follows a fixed expected rate of arrival. Demand shocks may be of two kinds:- Same number of users, different values for transactions and costs for waiting.- Increased number of users, same values and costs.We'll consider the second scenario here, simply running the simulation again and increasing the $\lambda$ parameter of our Poisson arrival process suddenly, from expecting 2000, to expecting 6000 users per round.
###Code
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_jump, user_pool_jump, chain_jump) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
The next plot shows the number of new users each round. We note at block 100 a sudden jump from around 2000 new users to 6000.
###Code
df_jump.plot("block", "users")
df_jump.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
We see a jump around block 100, when the arrival rate of users switches from 2000 to 6000. The basefee increases in response. With a block limit of 20M gas, about 950 transactions fit into each block. Targeting half of this value, the basefee increases until more or less 475 transactions are included in each block.Since our users' values and costs are always drawn from the same distribution, when 2000 users show up, we expect to let in about 25% of them (~ 475 / 2000), the 25% with greatest expected payoff. When 6000 users come in, we now only expect the "richest" 8% (~ 475 / 6000) to get in, so we "raise the bar" for the basefee, since we need to discriminate more.
###Code
df_jump.plot("block", ["pool_length", "users", "decided_txs", "included_txs"])
###Output
_____no_output_____
###Markdown
As we see with the graph above, for a short while after block 100, blocks include more than the usual ~475 transactions. This is the transition between the old and the new stationary points. Since we have a lot more new users each round, more of them are willing and able to pay for their transactions above the current basefee, and so get included. This keeps happening until the basefee reaches a new stationary level. Changing expected timeUp until now, users decided whether to join the transaction pool or not based on the expectation that they would be included at least 5 blocks after they join. They evaluated their payoff assuming that basefee did not change (due to stationarity) for these 5 blocks. If their value for transacting minus the cost of waiting for 5 blocks minus the cost of transacting is positive, they sent their transactions in!$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting 5 blocks} - \texttt{transaction fee} > 0 $$Under a stationary demand however, users can expect to be included in the next block. So let's have users expect to be included in the next block, right after their appearance, and see what happens. We do this by subclassing our `User1559` agent and overriding its `expected_time` method.
###Code
class OptimisticUser(User1559):
def expected_time(self, params):
return 0
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_opti, user_pool_opti, chain_opti) = simulate(demand_scenario, OptimisticUser)
df_opti.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
The plot looks the same as before. But let's look at the average basefee for the last 50 blocks in this scenario and the previous one.
###Code
df_opti[(df_opti.block > 150)][["basefee"]].mean()
df_jump[(df_jump.block > 150)][["basefee"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included in the next block rather than wait for at least 5, the basefee increases! This makes sense if we come back to our payoff definition:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$The estimated cost for waiting is lower now since users estimate they'll be included in the next block and not wait 5 blocks to get in. Previously, some users with high values but high time preferences might have been discouraged to join the pool. Now these users don't expect to wait as much, and since their values are high, they don't mind bidding for a higher basefee either. We can check indeed that on average, users included in this last scenario have higher values than users included in the previous one.To do so, we export to pandas `DataFrame`s the user pool (to obtain their values and costs) and the chain (to obtain the addresses of included users in the last 50 blocks).
###Code
user_pool_opti_df = user_pool_opti.export().rename(columns={ "pub_key": "sender" })
chain_opti_df = chain_opti.export()
###Output
_____no_output_____
###Markdown
Let's open these up and have a look at the data. `user_pool_opti_df` registers all users we spawned in our simulation.
###Code
user_pool_opti_df.tail()
###Output
_____no_output_____
###Markdown
Meanwhile, `chain_opti_df` lists all the transactions included in the chain.
###Code
chain_opti_df.tail()
###Output
_____no_output_____
###Markdown
With a simple join on the `sender` column we can associate each user with their included transaction. We look at the average value of included users after the second stationary point.
###Code
chain_opti_df[(chain_opti_df.block_height >= 150)].join(
user_pool_opti_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included at least one block after they send their transaction, the average value of included users is around 19.2 Gwei.
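A quick back-of-the-envelope check: if with next-block inclusion the ~475 slots per block simply go to the highest-value users among the ~6000 arrivals, the included users are roughly the top 8% of a Uniform(0, 20) distribution, whose average is indeed close to 19.2 Gwei. (This is a rough consistency check under that simplifying assumption, not an exact derivation.)
###Code
slots_per_block = 475   # the protocol's target number of transactions per block
arrivals = 6000         # expected new users per round after the demand shock
top_fraction = slots_per_block / arrivals

# Average of the top `top_fraction` of a Uniform(0, 20) distribution
20 * (1 - top_fraction / 2)  # ~19.2 Gwei
###Output
_____no_output_____
###Markdown
For comparison, the same average in the earlier demand-shock scenario, where users expected a five-block wait: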
###Code
user_pool_jump_df = user_pool_jump.export().rename(columns={ "pub_key": "sender" })
chain_jump_df = chain_jump.export()
chain_jump_df[(chain_jump_df.block_height >= 150)].join(
user_pool_jump_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
But when users expect to be included at least _five_ blocks after, the average value of included users is around 18.7 Gwei, confirming that when users expect next block inclusion, higher value users get in and raise the basefee in the process. ConclusionWe've looked at 1559 when users with their own values and costs decide whether to join the pool or not based on the current basefee level. These users estimate their ultimate payoff by assuming _stationarity_: the demand between rounds follows the same arrival process and the same distribution of values and costs. In this stationary environment, basefee settles on some value and mostly stays there, allowing users to estimate their payoff should they wait for five or one blocks to be included.We've again left aside some important questions. Here all users simply leave a 1 Gwei premium in their transactions. In reality, we should expect users to attempt to "game" the system by leaving higher tips to get in first. We can suppose that in a stationary environment, "gaming" is only possible until basefee reaches its stationary point (during the transition period) and exhausts the feasible demand. We will leave this question for another notebook.(Temporary) non-stationarity is more interesting. The [5% meme](https://insights.deribit.com/market-research/analysis-of-eip-2593-escalator/) during which sudden demand shocks precipitate a large influx of new, high-valued transactions should also see users try to outcompete each other based on premiums alone, until basefee catches up. The question of whether 1559 offers anything in this case or whether the whole situation would look like a first price auction may be better settled empirically, but we can intuit that 1559 would smooth the process slightly by [offering a (laggy) price oracle](https://twitter.com/onurhsolmaz/status/1286068365812011009).And then we have the question of miner collusion, which rightfully agitates a lot of the ongoing conversation. In the simulations we do here, we instantiated one transaction pool only, which should tell you that we are looking at a "centralised", honest miner that includes transactions as much as possible, and not a collection or a cartel of miners cooperating. We can of course weaken this assumption and have several mining pools with their own behaviours and payoff evaluations, much like we modelled our users. We still would like to have a good theoretical understanding of the risks and applicability of miner collusion strategies. Onward!--- (Bonus) Ex post individual rationality_Individual rationality_ is the idea that agents won't join a mechanism unless they hope to make some positive payoff out of it. I'd rather not transact if my value for transacting minus my costs is negative.In general, we like this property and we want to make the mechanism individually rational to as many agents as possible. Yet, some mechanisms fail to satisfy _ex post_ individual rationality: I might _expect_ to make a positive payoff from the mechanism, but some _realisation_ of the mechanism exists where my payoff is negative.Take an auction. As long as my bid is lower or equal to my value for the auctioned item, the mechanism is ex post individually rational for me: I can never "overpay". If I value the item for 10 ETH and decide to bid 11 ETH, in a first-price auction where I pay for my bid if I have the highest, there is a realisation of the mechanism where I am the winner and I am asked to pay 11 ETH. 
My payoff is -1 ETH then.In the transaction fee market, ex post individual rationality is not guaranteed unless I can cancel my transaction. In the simulations here, we do not offer this option to our agents. They expect to wait for inclusion for a certain amount of blocks, and evaluate whether their payoff after that wait is positive or not to decide whether to send their transaction or not. However, some agents might wait longer than their initial estimation, in particular before the mechanism reaches stationarity. Some realisations of the mechanism then yield a negative payoff for these agents, and the mechanism is not ex post individually rational.Let's look at the agents' payoff using the transcript of transactions included in the chain. For each transaction, we want to find out what was the ultimate payoff for the agent who sent it in. If the transaction was included much later than the agent's initial estimation, this payoff is negative, and the mechanism wasn't ex post individually rational to them.
###Code
user_pool_df = user_pool.export().rename(columns={ "pub_key": "sender" })
chain_df = chain.export()
user_txs_df = chain_df.join(user_pool_df.set_index("sender"), on="sender")
###Output
_____no_output_____
###Markdown
In the next chunk we obtain the users' payoffs: their value minus the costs incurred from the transaction fee and the time they waited.
###Code
user_txs_df["payoff"] = user_txs_df.apply(
lambda row: row.user.payoff({
"current_block": row.block_height,
"gas_price": row.tx.gas_price({
"basefee": row.basefee * (10 ** 9) # we need basefee in wei
})
    }) / (10 ** 9), # convert the payoff from wei to Gwei
axis = 1
)
user_txs_df["epir"] = user_txs_df.payoff.apply(
lambda payoff: payoff >= 0
)
###Output
_____no_output_____
###Markdown
Now we count the fraction of users in each block who received a positive payoff.
###Code
epir_df = pd.concat([
user_txs_df[["block_height", "tx_hash"]].groupby(["block_height"]).agg(["count"]),
user_txs_df[["block_height", "epir"]][user_txs_df.epir == True].groupby(["block_height"]).agg(["count"])
], axis = 1)
epir_df["percent_epir"] = epir_df.apply(
lambda row: row.epir / row.tx_hash * 100,
axis = 1
)
###Output
_____no_output_____
###Markdown
Let's plot it!
###Code
epir_df.reset_index().plot("block_height", ["percent_epir"])
###Output
_____no_output_____
###Markdown
TL;DR- EIP 1559 is a proposed improvement for the transaction fee market. It sets a variable "base" gasprice to be paid by the user and burned by the protocol, in addition to a "tip" paid by the user to the block producer.- The base price ("basefee") adjusts upwards when demand is high, and downwards otherwise.- We observe in this notebook that in a stationary environmnent, basefee converges to a value that prices out enough users to achieve the target block size.---We introduce here the building blocks of agent-based simulations of EIP1559. This follows an [earlier notebook](https://nbviewer.jupyter.org/github/ethereum/rig/blob/master/eip1559/eip1559.ipynb) that merely looked at the dynamics of the EIP 1559 mechanism. In the present notebook, agents decide on transactions based on the current basefee and form their transactions based on internal evaluations of their values and costs.[Huberman et al., 2019](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3025604) introduced such a model and framework for the Bitcoin payment system. We adapt it here to study the dynamics of the basefee.All the code is available in [this repo](https://github.com/barnabemonnot/abm1559), with some preliminary documentation [here](https://barnabemonnot.com/abm1559/build/html/). You can also download the [`abm1559` package from PyPi](https://pypi.org/project/abm1559/) and reproduce all the analysis here yourself! The broad linesWe have several entities. _Users_ come in randomly (following a Poisson process) and create and send transactions. The transactions are received by a _transaction pool_, from which the $x$ best _valid_ transactions are included in a _block_ created at fixed intervals. $x$ depends on how many valid transactions exist in the pool (e.g., how many post a gasprice exceeding the prevailing basefee in 1559 paradigm) and the block gas limit. Once transactions are included in the block, and the block is included in the _chain_, transactions are removed from the transaction pool.How do users set their parameters? Users have their own internal ways of evaluating their _costs_. Users obtain a certain _value_ from having their transaction included, which we call $v$. $v$ is different for every user. This value is fixed but their overall _payoff_ decreases the longer they wait to be included. Some users have higher time preferences than others, and their payoff decreases faster than others the longer they wait. Put together, we have the following:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$Users expect to wait for a certain amount of time. In this essay, we set this to a fixed value -- somewhat arbitrarily we choose 5. This can be readily understood in the following way. Users estimate what their payoff will be from getting included 5 blocks from now, assuming basefee remains constant. If this payoff is negative, they decide not to send the transaction to the pool (in queuing terminology, they _balk_). We'll play with this assumption later.The scenario is set up this way to study _stationarity_: assuming some demand comes in from a fixed distribution at regular intervals, we must expect basefee to reach some stationary value and stay there. It is then reasonable for users, at this stationary point, to consider that 5 blocks from now basefee will still be at the same level. 
In the nonstationary case, when for instance a systemic change in the demand happens (e.g., the rate of Poisson arrivals increases), a user may want to hedge their bets by estimating their future payoffs in a different way, taking into account that basefee might increase instead. This strategy would probably be a good idea during the _transition_ phase, when basefee shifts from one stationary point to a new one.We make the assumption here that users choose their 1559 parameters based on their value alone. We set the transaction `max_fee` parameter to the value of the user and set the `gas_premium` parameter to a residual value -- 1 Gwei per unit of gas.There is no loss of generality in assuming all users send the same transaction in (e.g., a simple transfer) and so all transactions have the same `gas_used` value (21,000). In 1559 paradigm, with a 20M gas limit per block, this allows at most 952 transactions to be included, although the mechanism will target half of that, around 475 here. The protocol adjusts the basefee to apply economic pressure, towards a target gas usage of 10M per block. SimulationWe import a few classes from our `abm1559` package.
###Code
%config InlineBackend.figure_format = 'svg'
import os, sys
sys.path.insert(1, os.path.realpath(os.path.pardir))
# You may remove the two lines above if you have installed abm1559 from pypi
from abm1559.utils import constants
from abm1559.txpool import TxPool
from abm1559.users import User1559
from abm1559.userpool import UserPool
from abm1559.chain import (
Chain,
Block1559,
)
from abm1559.simulator import (
spawn_poisson_demand,
update_basefee,
)
import pandas as pd
###Output
_____no_output_____
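###Markdown
As a quick sanity check on the capacity numbers quoted above (21,000 gas per simple transfer against a 20M block gas limit):
###Code
gas_limit = 20_000_000       # block gas limit assumed in this notebook
gas_per_tx = 21_000          # a simple transfer
max_txs = gas_limit // gas_per_tx            # at most 952 transactions per block
target_txs = (gas_limit // 2) // gas_per_tx  # the 10M target, i.e. "around 475"
max_txs, target_txs
###Output
_____no_output_____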
###Markdown
And define the main function used to simulate the fee market.
###Code
def simulate(demand_scenario, UserClass):
# Instantiate a couple of things
txpool = TxPool()
basefee = constants["INITIAL_BASEFEE"]
chain = Chain()
metrics = []
user_pool = UserPool()
for t in range(len(demand_scenario)):
if t % 100 == 0: print(t)
# `env` is the "environment" of the simulation
env = {
"basefee": basefee,
"current_block": t,
}
# We return a demand drawn from a Poisson distribution.
# The parameter is given by `demand_scenario[t]`, and can vary
# over time.
users = spawn_poisson_demand(t, demand_scenario[t], UserClass)
# We query each new user with the current basefee value
# Users either return a transaction or None if they prefer to balk
decided_txs = user_pool.decide_transactions(users, env)
# New transactions are added to the transaction pool
txpool.add_txs(decided_txs)
# The best valid transactions are taken out of the pool for inclusion
selected_txs = txpool.select_transactions(env)
txpool.remove_txs([tx.tx_hash for tx in selected_txs])
# We create a block with these transactions
block = Block1559(txs = selected_txs, parent_hash = chain.current_head, height = t, basefee = basefee)
# The block is added to the chain
chain.add_block(block)
# A couple of metrics we will use to monitor the simulation
row_metrics = {
"block": t,
"basefee": basefee / (10 ** 9),
"users": len(users),
"decided_txs": len(decided_txs),
"included_txs": len(selected_txs),
"blk_avg_gas_price": block.average_gas_price(),
"blk_avg_tip": block.average_tip(),
"pool_length": txpool.pool_length(),
}
metrics.append(row_metrics)
# Finally, basefee is updated and a new round starts
basefee = update_basefee(block, basefee)
return (pd.DataFrame(metrics), user_pool, chain)
###Output
_____no_output_____
###Markdown
As you can see, `simulate` takes in a `demand_scenario` array. Earlier we mentioned that each round, we draw the number of users wishing to send transactions from a Poisson distribution. [This distribution is parameterised by the expected number of arrivals, called _lambda_ $\lambda$](https://en.wikipedia.org/wiki/Poisson_distribution). The `demand_scenario` array contains a sequence of such lambdas. We also provide in `UserClass` the type of user we would like to model (see the [docs](http://barnabemonnot.com/abm1559/build/html/users) for more details). Our users draw their _value_ for the transaction (per unit of gas) from a uniform distribution, picking a random number between 0 and 20 (Gwei). Their cost for waiting one extra unit of time is drawn from a uniform distribution too, this time between 0 and 1 (Gwei). The closer their cost is to 1, the more impatient users are. Say for instance that I value each unit of gas at 15 Gwei, and my cost per round is 0.5 Gwei. If I wait for 6 blocks to be included at a gas price of 10 Gwei, my payoff is $15 - 6 \times 0.5 - 10 = 2$. The numbers above sound arbitrary, and in a sense they are! They were chosen to respect the scales we are used to ([although gas prices are closer to 100 Gweis these days...](https://ethereum.github.io/rig/ethdata/notebooks/gas_weather_reports/exploreJuly21.html)). It also turns out that any distribution (uniform, Pareto, whatever floats your boat) leads to stationarity. The important part is that _some_ users have positive value for transacting in the first place, enough to fill a block to its target size at least. The choice to sample the cost from a uniform distribution, as opposed to having all users experience the same cost per round, allows for **simulating a scenario where some users are more in a hurry than others**.
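The numerical example above, written as a one-line helper (a standalone sketch in Gwei per unit of gas; the package's `User1559` computes the same quantity internally):
###Code
def realised_payoff(value, cost_per_block, blocks_waited, gas_price):
    # All quantities in Gwei per unit of gas
    return value - blocks_waited * cost_per_block - gas_price

realised_payoff(value=15, cost_per_block=0.5, blocks_waited=6, gas_price=10)  # = 2
###Output
_____no_output_____
###Markdown
With that in mind, let's run the simulation for 200 blocks.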
###Code
demand_scenario = [2000 for i in range(200)]
(df, user_pool, chain) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
To study the stationary case, we create an array repeating $\lambda$ for as many blocks as we wish to simulate the market for. We set $\lambda$ to spawn on average 2000 users between two blocks. ResultsLet's print the head and tail of the data frame holding our metrics. Each row corresponds to one round of our simulation, so one block.
###Code
df
###Output
_____no_output_____
###Markdown
At the start of the simulation we clearly see in column `users` a demand close to 2000 users per round. Among these 2000 or so, around 1500 decide to send their transaction in (`decided_txs`). The 500 who don't might have a low value or high per-round costs, meaning it is unprofitable for them to even send their transaction in. Eventually 952 of them are included (`included_txs`), maxing out the block gas limit. The basefee starts at 1 Gwei but steadily increases from there, reaching around 11.8 Gwei by the end.By the end of the simulation, we note that `decided_txs` is always equal to `included_txs`. By this point, the basefee has risen enough to make it unprofitable for most users to send their transactions. This is exactly what we want! Users balk at the current prices.In the next chart we show the evolution of basefee and tips. We define _tip_ as the gas price minus the basefee, which is what _miners_ receive from the transaction.Note that [tip is in general **not** equal to the gas premium](https://twitter.com/barnabemonnot/status/1284271520311848960) that users set. This is particularly true when basefee plus gas premium exceeds the max fee of the user. In the graph below, the tip hovers around 1 Gwei (the premium), but is sometimes less than 1 too, especially when users see the prevailing basefee approach their posted max fees.
###Code
df.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
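###Markdown
We can also sanity-check the level basefee settles at. At stationarity a user with value $v$ and per-block waiting cost $c$ joins only when $v - 5c - (b + 1) \geq 0$ (five expected blocks of waiting, 1 Gwei premium). With $v \sim U(0, 20)$ and $c \sim U(0, 1)$, the share of arrivals who join is $(16.5 - b)/20$, and solving $2000 \times (16.5 - b)/20 = 475$ gives $b \approx 11.75$ Gwei — close to the 11.8 Gwei the simulation settles on. This is a back-of-the-envelope check under the assumptions above, not an exact derivation.
###Code
target_txs = 475    # the protocol's target per block
arrivals = 2000     # expected new users per round
premium = 1         # Gwei
expected_wait = 5   # blocks
mean_cost = 0.5     # E[cost] for costs uniform on [0, 1]

# Share of arrivals joining at basefee b: (20 - premium - expected_wait * mean_cost - b) / 20
# Solve arrivals * share == target_txs for b:
stationary_basefee = 20 - premium - expected_wait * mean_cost - 20 * target_txs / arrivals
stationary_basefee  # ~11.75 Gwei
###Output
_____no_output_____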
###Markdown
Notice the increase at the beginning followed by a short drop? At the very beginning, the pool fills up quickly with many users hopeful to get their transactions in with a positive resulting payoff. The basefee increases until users start balking **and** the pool is exhausted. Once exhausted, basefee starts decreasing again to settle at the stationary point where the pool only includes transactions that are invalid given the stationary basefee.We can see the pool length becoming stationary in the next plot, showing the length of the pool over time.
###Code
df.plot("block", "pool_length")
###Output
_____no_output_____
###Markdown
The remaining transactions are likely from early users who did not balk even though basefee was increasing, and who were quickly outbid by others. Demand shockWe look at a stationary setting, where the new demand coming in each new round follows a fixed expected rate of arrival. Demand shocks may be of two kinds:- Same number of users, different values for transactions and costs for waiting.- Increased number of users, same values and costs.We'll consider the second scenario here, simply running the simulation again and increasing the $\lambda$ parameter of our Poisson arrival process suddenly, from expecting 2000, to expecting 6000 users per round.
###Code
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_jump, user_pool_jump, chain_jump) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
The next plot shows the number of new users each round. We note at block 100 a sudden jump from around 2000 new users to 6000.
###Code
df_jump.plot("block", "users")
df_jump.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
We see a jump around block 100, when the arrival rate of users switches from 2000 to 6000. The basefee increases in response. With a block limit of 20M gas, about 950 transactions fit into each block. Targeting half of this value, the basefee increases until more or less 475 transactions are included in each block.Since our users' values and costs are always drawn from the same distribution, when 2000 users show up, we expect to let in about 25% of them (~ 475 / 2000), the 25% with greatest expected payoff. When 6000 users come in, we now only expect the "richest" 8% (~ 475 / 6000) to get in, so we "raise the bar" for the basefee, since we need to discriminate more.
###Code
df_jump.plot("block", ["pool_length", "users", "decided_txs", "included_txs"])
###Output
_____no_output_____
###Markdown
As we see with the graph above, for a short while after block 100, blocks include more than the usual ~475 transactions. This is the transition between the old and the new stationary points. Since we have a lot more new users each round, more of them are willing and able to pay for their transactions above the current basefee, and so get included. This keeps happening until the basefee reaches a new stationary level. Changing expected timeUp until now, users decided whether to join the transaction pool or not based on the expectation that they would be included at least 5 blocks after they join. They evaluated their payoff assuming that basefee did not change (due to stationarity) for these 5 blocks. If their value for transacting minus the cost of waiting for 5 blocks minus the cost of transacting is positive, they sent their transactions in!$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting 5 blocks} - \texttt{transaction fee} > 0 $$Under a stationary demand however, users can expect to be included in the next block. So let's have users expect to be included in the next block, right after their appearance, and see what happens. We do this by subclassing our `User1559` agent and overriding its `expected_time` method.
###Code
class OptimisticUser(User1559):
def expected_time(self, env):
return 0
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_opti, user_pool_opti, chain_opti) = simulate(demand_scenario, OptimisticUser)
df_opti.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
The plot looks the same as before. But let's look at the average basefee for the last 50 blocks in this scenario and the previous one.
###Code
df_opti[(df_opti.block > 150)][["basefee"]].mean()
df_jump[(df_jump.block > 150)][["basefee"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included in the next block rather than wait for at least 5, the basefee increases! This makes sense if we come back to our payoff definition:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$The estimated cost for waiting is lower now since users estimate they'll be included in the next block and not wait 5 blocks to get in. Previously, some users with high values but high time preferences might have been discouraged to join the pool. Now these users don't expect to wait as much, and since their values are high, they don't mind bidding for a higher basefee either. We can check indeed that on average, users included in this last scenario have higher values than users included in the previous one.To do so, we export to pandas `DataFrame`s the user pool (to obtain their values and costs) and the chain (to obtain the addresses of included users in the last 50 blocks).
###Code
user_pool_opti_df = user_pool_opti.export().rename(columns={ "pub_key": "sender" })
chain_opti_df = chain_opti.export()
###Output
_____no_output_____
###Markdown
Let's open these up and have a look at the data. `user_pool_opti_df` registers all users we spawned in our simulation.
###Code
user_pool_opti_df.tail()
###Output
_____no_output_____
###Markdown
Meanwhile, `chain_opti_df` lists all the transactions included in the chain.
###Code
chain_opti_df.tail()
###Output
_____no_output_____
###Markdown
With a simple join on the `sender` column we can associate each user with their included transaction. We look at the average value of included users after the second stationary point.
###Code
chain_opti_df[(chain_opti_df.block_height >= 150)].join(
user_pool_opti_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included at least one block after they send their transaction, the average value of included users is around 19.2 Gwei.
###Code
user_pool_jump_df = user_pool_jump.export().rename(columns={ "pub_key": "sender" })
chain_jump_df = chain_jump.export()
chain_jump_df[(chain_jump_df.block_height >= 150)].join(
user_pool_jump_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
But when users expect to be included at least _five_ blocks after, the average value of included users is around 18.7 Gwei, confirming that when users expect next block inclusion, higher value users get in and raise the basefee in the process. ConclusionWe've looked at 1559 when users with their own values and costs decide whether to join the pool or not based on the current basefee level. These users estimate their ultimate payoff by assuming _stationarity_: the demand between rounds follows the same arrival process and the same distribution of values and costs. In this stationary environment, basefee settles on some value and mostly stays there, allowing users to estimate their payoff should they wait for five or one blocks to be included.We've again left aside some important questions. Here all users simply leave a 1 Gwei premium in their transactions. In reality, we should expect users to attempt to "game" the system by leaving higher tips to get in first. We can suppose that in a stationary environment, "gaming" is only possible until basefee reaches its stationary point (during the transition period) and exhausts the feasible demand. We will leave this question for another notebook.(Temporary) non-stationarity is more interesting. The [5% meme](https://insights.deribit.com/market-research/analysis-of-eip-2593-escalator/) during which sudden demand shocks precipitate a large influx of new, high-valued transactions should also see users try to outcompete each other based on premiums alone, until basefee catches up. The question of whether 1559 offers anything in this case or whether the whole situation would look like a first price auction may be better settled empirically, but we can intuit that 1559 would smooth the process slightly by [offering a (laggy) price oracle](https://twitter.com/onurhsolmaz/status/1286068365812011009).And then we have the question of miner collusion, which rightfully agitates a lot of the ongoing conversation. In the simulations we do here, we instantiated one transaction pool only, which should tell you that we are looking at a "centralised", honest miner that includes transactions as much as possible, and not a collection or a cartel of miners cooperating. We can of course weaken this assumption and have several mining pools with their own behaviours and payoff evaluations, much like we modelled our users. We still would like to have a good theoretical understanding of the risks and applicability of miner collusion strategies. Onward!--- (Bonus) Ex post individual rationality_Individual rationality_ is the idea that agents won't join a mechanism unless they hope to make some positive payoff out of it. I'd rather not transact if my value for transacting minus my costs is negative.In general, we like this property and we want to make the mechanism individually rational to as many agents as possible. Yet, some mechanisms fail to satisfy _ex post_ individual rationality: I might _expect_ to make a positive payoff from the mechanism, but some _realisation_ of the mechanism exists where my payoff is negative.Take an auction. As long as my bid is lower or equal to my value for the auctioned item, the mechanism is ex post individually rational for me: I can never "overpay". If I value the item for 10 ETH and decide to bid 11 ETH, in a first-price auction where I pay for my bid if I have the highest, there is a realisation of the mechanism where I am the winner and I am asked to pay 11 ETH. 
My payoff is -1 ETH then.In the transaction fee market, ex post individual rationality is not guaranteed unless I can cancel my transaction. In the simulations here, we do not offer this option to our agents. They expect to wait for inclusion for a certain amount of blocks, and evaluate whether their payoff after that wait is positive or not to decide whether to send their transaction or not. However, some agents might wait longer than their initial estimation, in particular before the mechanism reaches stationarity. Some realisations of the mechanism then yield a negative payoff for these agents, and the mechanism is not ex post individually rational.Let's look at the agents' payoff using the transcript of transactions included in the chain. For each transaction, we want to find out what was the ultimate payoff for the agent who sent it in. If the transaction was included much later than the agent's initial estimation, this payoff is negative, and the mechanism wasn't ex post individually rational to them.
###Code
user_pool_df = user_pool.export().rename(columns={ "pub_key": "sender" })
chain_df = chain.export()
user_txs_df = chain_df.join(user_pool_df.set_index("sender"), on="sender")
###Output
_____no_output_____
###Markdown
In the next chunk we obtain the users' payoffs: their value minus the costs incurred from the transaction fee and the time they waited.
###Code
user_txs_df["payoff"] = user_txs_df.apply(
lambda row: row.user.payoff({
"current_block": row.block_height,
"gas_price": row.tx.gas_price({
"basefee": row.basefee * (10 ** 9) # we need basefee in wei
})
    }) / (10 ** 9), # convert the payoff from wei to Gwei
axis = 1
)
user_txs_df["epir"] = user_txs_df.payoff.apply(
lambda payoff: payoff >= 0
)
###Output
_____no_output_____
###Markdown
Now we count the fraction of users in each block who received a positive payoff.
###Code
epir_df = pd.concat([
user_txs_df[["block_height", "tx_hash"]].groupby(["block_height"]).agg(["count"]),
user_txs_df[["block_height", "epir"]][user_txs_df.epir == True].groupby(["block_height"]).agg(["count"])
], axis = 1)
epir_df["percent_epir"] = epir_df.apply(
lambda row: row.epir / row.tx_hash * 100,
axis = 1
)
###Output
_____no_output_____
###Markdown
Let's plot it!
###Code
epir_df.reset_index().plot("block_height", ["percent_epir"])
###Output
_____no_output_____
###Markdown
At the very beginning, all users (100%) have positive payoff. They have only waited for 1 block to get included. This percentage steadily drops, as basefee increases: some high value users waiting in the pool get included much later than they expected, netting a negative payoff.Once we pass the initial instability (while basefee is looking for its stationary value), all users receive a positive payoff. This is somewhat expected: once basefee has increased enough to weed out excess demand, users are pretty much guaranteed to be included in the next block, and so the realised waiting time will always be less than their estimate. ---_Check out also:_ A recent [ethresear.ch post](https://ethresear.ch/t/a-mechanism-for-daily-autonomous-gas-price-stabilization/7762) by [Onur Solmaz](https://twitter.com/onurhsolmaz), on a 1559-inspired mechanism for daily gas price stabilization, with simulations.
###Markdown
Many thanks to Sacha for his comments, edits and corrections (all errors remain mine); Dan Finlay for prompting a live discussion of this notebook in a recent call.
###Markdown
Stationary behaviour of EIP 1559 agent-based model July 2020, [@barnabemonnot](https://twitter.com/barnabemonnot) [Robust Incentives Group](https://github.com/ethereum/rig), Ethereum Foundation---We introduce here the building blocks of agent-based simulations of EIP1559. This follows an [earlier notebook](https://nbviewer.jupyter.org/github/ethereum/rig/blob/master/eip1559/eip1559.ipynb) that merely looked at the dynamics of the EIP 1559 mechanism. In the present notebook, agents decide on transactions based on the current basefee and form their transactions based on internal evaluations of their values and costs.[Huberman et al., 2019](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3025604) introduced such a model and framework for the Bitcoin payment system. We adapt it here to study the dynamics of the basefee.All the code is available in [this repo](https://github.com/barnabemonnot/abm1559), with some preliminary documentation [here](https://barnabemonnot.com/abm1559/build/html/). You can also download the [`abm1559` package from PyPi](https://pypi.org/project/abm1559/) and reproduce all the analysis here yourself! The broad linesWe have several entities. _Users_ come in randomly (following a Poisson process) and create and send transactions. The transactions are received by a _transaction pool_, from which the $x$ best _valid_ transactions are included in a _block_ created at fixed intervals. $x$ depends on how many valid transactions exist in the pool (e.g., how many post a gasprice exceeding the prevailing basefee in 1559 paradigm) and the block gas limit. Once transactions are included in the block, and the block is included in the _chain_, transactions are removed from the transaction pool.How do users set their parameters? Users have their own internal ways of evaluating their _costs_. Users obtain a certain _value_ from having their transaction included, which we call $v$. $v$ is different for every user. This value is fixed but their overall _payoff_ decreases the longer they wait to be included. Some users have higher time preferences than others, and their payoff decreases faster than others the longer they wait. Put together, we have the following:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$Users expect to wait for a certain amount of time. In this essay, we set this to a fixed value -- somewhat arbitrarily we choose 5. This can be readily understood in the following way. Users estimate what their payoff will be from getting included 5 blocks from now, assuming basefee remains constant. If this payoff is negative, they decide not to send the transaction to the pool (in queuing terminology, they _balk_). We'll play with this assumption later.The scenario is set up this way to study _stationarity_: assuming some demand comes in from a fixed distribution at regular intervals, we must expect basefee to reach some stationary value and stay there. It is then reasonable for users, at this stationary point, to consider that 5 blocks from now basefee will still be at the same level. In the nonstationary case, when for instance a systemic change in the demand happens (e.g., the rate of Poisson arrivals increases), a user may want to hedge their bets by estimating their future payoffs in a different way, taking into account that basefee might increase instead. 
This strategy would probably be a good idea during the _transition_ phase, when basefee shifts from one stationary point to a new one.We make the assumption here that users choose their 1559 parameters based on their value alone. We set the transaction `max_fee` parameter to the value of the user and set the `gas_premium` parameter to a residual value -- 1 Gwei per unit of gas.There is no loss of generality in assuming all users send the same transaction in (e.g., a simple transfer) and so all transactions have the same `gas_used` value (21,000). In 1559 paradigm, with a 20M gas limit per block, this allows at most 952 transactions to be included, although the mechanism will target half of that, around 475 here. The protocol adjusts the basefee to apply economic pressure, towards a target gas usage of 10M per block. SimulationWe import a few classes from our `abm1559` package.
###Code
import os, sys
sys.path.insert(1, os.path.realpath(os.path.pardir))
# You may remove the two lines above if you have installed abm1559 from pypi
from abm1559.utils import constants
from abm1559.txpool import TxPool
from abm1559.users import User1559
from abm1559.userpool import UserPool
from abm1559.chain import (
Chain,
Block1559,
)
from abm1559.simulator import (
spawn_poisson_demand,
update_basefee,
)
import pandas as pd
###Output
_____no_output_____
###Markdown
And define the main function used to simulate the fee market.
###Code
def simulate(demand_scenario, UserClass):
# Instantiate a couple of things
txpool = TxPool()
basefee = constants["INITIAL_BASEFEE"]
chain = Chain()
metrics = []
user_pool = UserPool()
for t in range(len(demand_scenario)):
if t % 100 == 0: print(t)
# `env` is the "environment" of the simulation
env = {
"basefee": basefee,
"current_block": t,
}
# We return a demand drawn from a Poisson distribution.
# The parameter is given by `demand_scenario[t]`, and can vary
# over time.
users = spawn_poisson_demand(t, demand_scenario[t], UserClass)
# We query each new user with the current basefee value
# Users either return a transaction or None if they prefer to balk
decided_txs = user_pool.decide_transactions(users, env)
# New transactions are added to the transaction pool
txpool.add_txs(decided_txs)
# The best valid transactions are taken out of the pool for inclusion
selected_txs = txpool.select_transactions(env)
txpool.remove_txs([tx.tx_hash for tx in selected_txs])
# We create a block with these transactions
block = Block1559(txs = selected_txs, parent_hash = chain.current_head, height = t, basefee = basefee)
# The block is added to the chain
chain.add_block(block)
# A couple of metrics we will use to monitor the simulation
row_metrics = {
"block": t,
"basefee": basefee / (10 ** 9),
"users": len(users),
"decided_txs": len(decided_txs),
"included_txs": len(selected_txs),
"blk_avg_gas_price": block.average_gas_price(),
"blk_avg_tip": block.average_tip(),
"pool_length": txpool.pool_length,
}
metrics.append(row_metrics)
# Finally, basefee is updated and a new round starts
basefee = update_basefee(block, basefee)
return (pd.DataFrame(metrics), user_pool, chain)
###Output
_____no_output_____
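###Markdown
The `update_basefee` call at the end of each round is what nudges basefee towards the 10M gas target. As a rough sketch of the kind of rule applied there -- the canonical EIP 1559 adjustment, with a maximum change of 1/8 per block; the exact code lives in the `abm1559` package -- consider:
###Code
def sketch_update_basefee(basefee, gas_used, gas_target=10000000, max_change_denominator=8):
    # Move basefee up when blocks are fuller than the target, down when they are emptier
    delta = gas_used - gas_target
    return basefee * (1 + delta / gas_target / max_change_denominator)

# A completely full 20M gas block pushes basefee up by 12.5%
sketch_update_basefee(1 * (10 ** 9), 20000000)
###Output
_____no_output_____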
###Markdown
As you can see, `simulate` takes in a `demand_scenario` array. Earlier we mentioned that each round, we draw the number of users wishing to send transactions from a Poisson distribution. [This distribution is parameterised by the expected number of arrivals, called _lambda_ $\lambda$](https://en.wikipedia.org/wiki/Poisson_distribution). The `demand_scenario` array contains a sequence of such lambda's. We also provide in `UserClass` the type of user we would like to model (see the [docs](http://barnabemonnot.com/abm1559/build/html/users) for more details).Our users draw their _value_ for the transaction (per unit of gas) from a uniform distribution, picking a random number between 0 and 20 (Gwei). Their cost for waiting one extra unit of time is drawn from a uniform distribution too, this time between 0 and 1 (Gwei). The closer their cost is to 1, the more impatient users are.Say for instance that I value each unit of gas at 15 Gwei, and my cost per round is 0.5 Gwei. If I wait for 6 blocks to be included at a gas price of 10 Gwei, my payoff is $15 - 6 \times 0.5 - 10 = 2$.The numbers above sound arbitrary, and in a sense they are! They were chosen to respect the scales we are used to ([although gas prices are closer to 100 Gweis these days...](https://ethereum.github.io/rig/ethdata/notebooks/gas_weather_reports/exploreJuly21.html)). It also turns out that any distribution (uniform, Pareto, whatever floats your boat) leads to stationarity. The important part is that _some_ users have positive value for transacting in the first place, enough to fill a block to its target size at least. The choice of sample the cost from a uniform distribution, as opposed to having all users experience the same cost per round, allows for **simulating a scenario where some users are more in a hurry than others**.
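To make the decision rule concrete, here is a small sketch (the variable names are illustrative, not the `abm1559` API): a user sends their transaction only when the expected payoff is non-negative, and balks otherwise.
###Code
def expected_payoff(value, cost_per_block, gas_price, expected_wait=5):
    # payoff = value - cost from waiting - transaction fee (all per unit of gas, in Gwei)
    return value - expected_wait * cost_per_block - gas_price

def sends_transaction(value, cost_per_block, gas_price, expected_wait=5):
    # The user balks (does not send) if the expected payoff is negative
    return expected_payoff(value, cost_per_block, gas_price, expected_wait) >= 0
###Output
_____no_output_____
###Markdown
Plugging in the example above (value 15 Gwei, cost 0.5 Gwei per block, gas price 10 Gwei, 6 blocks of waiting) indeed gives a payoff of 2 Gwei. With this decision rule in mind, let's run the simulation.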
###Code
demand_scenario = [2000 for i in range(200)]
(df, user_pool, chain) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
To study the stationary case, we create an array repeating $\lambda$ for as many blocks as we wish to simulate the market for. We set $\lambda$ to spawn on average 2000 users between two blocks. ResultsLet's print the head and tail of the data frame holding our metrics. Each row corresponds to one round of our simulation, so one block.
###Code
df
###Output
_____no_output_____
###Markdown
At the start of the simulation we clearly see in column `users` a demand close to 2000 users per round. Among these 2000 or so, around 1500 decide to send their transaction in (`decided_txs`). The 500 who don't might have a low value or high per-round costs, meaning it is unprofitable for them to even send their transaction in. Eventually 952 of them are included (`included_txs`), maxing out the block gas limit. The basefee starts at 1 Gwei but steadily increases from there, reaching around 11.8 Gwei by the end.By the end of the simulation, we note that `decided_txs` is always equal to `included_txs`. By this point, the basefee has risen enough to make it unprofitable for most users to send their transactions. This is exactly what we want! Users balk at the current prices.In the next chart we show the evolution of basefee and tips. We define _tip_ as the gas price minus the basefee, which is what _miners_ receive from the transaction.Note that [tip is in general **not** equal to the gas premium](https://twitter.com/barnabemonnot/status/1284271520311848960) that users set. This is particularly true when basefee plus gas premium exceeds the max fee of the user. In the graph below, the tip hovers around 1 Gwei (the premium), but is sometimes less than 1 too, especially when users see the prevailing basefee approach their posted max fees.
###Code
df.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
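###Markdown
To read the tip curve, recall how the effective gas price of a 1559 transaction is derived from its parameters; a small sketch of the rule described above (ad-hoc function names, not the `abm1559` API):
###Code
def effective_gas_price(basefee, max_fee, gas_premium):
    # The user never pays more than their max_fee, otherwise basefee plus premium
    return min(max_fee, basefee + gas_premium)

def tip(basefee, max_fee, gas_premium):
    # What the miner receives on top of the burned basefee
    return effective_gas_price(basefee, max_fee, gas_premium) - basefee
###Output
_____no_output_____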
###Markdown
Notice the increase at the beginning followed by a short drop? At the very beginning, the pool fills up quickly with many users hopeful to get their transactions in with a positive resulting payoff. The basefee increases until users start balking **and** the pool is exhausted. Once exhausted, basefee starts decreasing again to settle at the stationary point where the pool only includes transactions that are invalid given the stationary basefee.We can see the pool length becoming stationary in the next plot, showing the length of the pool over time.
###Code
df.plot("block", "pool_length")
###Output
_____no_output_____
###Markdown
The remaining transactions are likely from early users who did not balk even though basefee was increasing, and who were quickly outbid by others. Demand shockWe look at a stationary setting, where the new demand coming in each new round follows a fixed expected rate of arrival. Demand shocks may be of two kinds:- Same number of users, different values for transactions and costs for waiting.- Increased number of users, same values and costs.We'll consider the second scenario here, simply running the simulation again and increasing the $\lambda$ parameter of our Poisson arrival process suddenly, from expecting 2000, to expecting 6000 users per round.
###Code
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_jump, user_pool_jump, chain_jump) = simulate(demand_scenario, User1559)
###Output
0
100
###Markdown
The next plot shows the number of new users each round. We note at block 100 a sudden jump from around 2000 new users to 6000.
###Code
df_jump.plot("block", "users")
df_jump.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
We see a jump around block 100, when the arrival rate of users switches from 2000 to 6000. The basefee increases in response. With a block limit of 20M gas, about 950 transactions fit into each block. Targeting half of this value, the basefee increases until more or less 475 transactions are included in each block.Since our users' values and costs are always drawn from the same distribution, when 2000 users show up, we expect to let in about 25% of them (~ 475 / 2000), the 25% with greatest expected payoff. When 6000 users come in, we now only expect the "richest" 8% (~ 475 / 6000) to get in, so we "raise the bar" for the basefee, since we need to discriminate more.
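A quick back-of-the-envelope check of that intuition (a sketch that ignores waiting costs and the 1 Gwei premium, and assumes values uniform on [0, 20] Gwei as above): to admit only the top k out of $\lambda$ users, the bar must sit near the corresponding quantile of the value distribution.
###Code
# Illustrative only: the approximate value threshold needed to admit ~475 users per block
def value_bar(expected_users, included=475, max_value=20):
    return max_value * (1 - included / expected_users)

value_bar(2000), value_bar(6000)  # roughly 15.25 and 18.42 Gwei
###Output
_____no_output_____
###Markdown
These thresholds are only indicative, but they point in the same direction as the basefee shift seen above. The next plot looks at the pool and inclusion counts around the shock.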
###Code
df_jump.plot("block", ["pool_length", "users", "decided_txs", "included_txs"])
###Output
_____no_output_____
###Markdown
As we see with the graph above, for a short while after block 100, blocks include more than the usual ~475 transactions. This is the transition between the old and the new stationary points.Since we have a lot more new users each round, more of them are willing and able to pay for their transactions above the current basefee, and so get included. This keeps happening until the basefee reaches a new stationary level. Changing expected timeUp until now, users decided whether to join the transaction pool or not based on the expectation that they would be included at least 5 blocks after they join. They evaluated their payoff assuming that basefee did not change (due to stationarity) for these 5 blocks. If their value for transacting minus the cost of waiting for 5 blocks minus the cost of transacting is positive, they sent their transactions in!$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting 5 blocks} - \texttt{transaction fee} > 0 $$Under a stationary demand however, users can expect to be included in the next block. So let's have user expect to be included in the next block, right after their appearance, and see what happens. We do this by subclassing our `User1559` agent and overriding its `expected_time` method.
###Code
class OptimisticUser(User1559):
def expected_time(self, env):
return 0
demand_scenario = [2000 for i in range(100)] + [6000 for i in range(100)]
(df_opti, user_pool_opti, chain_opti) = simulate(demand_scenario, OptimisticUser)
df_opti.plot("block", ["basefee", "blk_avg_tip"])
###Output
_____no_output_____
###Markdown
The plot looks the same as before. But let's look at the average basefee for the last 50 blocks in this scenario and the last.
###Code
df_opti[(df_opti.block > 150)][["basefee"]].mean()
df_jump[(df_jump.block > 150)][["basefee"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included in the next block rather than wait for at least 5, the basefee increases! This makes sense if we come back to our payoff definition:$$ \texttt{payoff} = \texttt{value} - \texttt{cost from waiting} - \texttt{transaction fee} $$The estimated cost for waiting is lower now since users estimate they'll be included in the next block and not wait 5 blocks to get in. Previously, some users with high values but high time preferences might have been discouraged to join the pool. Now these users don't expect to wait as much, and since their values are high, they don't mind bidding for a higher basefee either. We can check indeed that on average, users included in this last scenario have higher values than users included in the previous one.To do so, we export to pandas `DataFrame`s the user pool (to obtain their values and costs) and the chain (to obtain the addresses of included users in the last 50 blocks).
###Code
user_pool_opti_df = user_pool_opti.export().rename(columns={ "pub_key": "sender" })
chain_opti_df = chain_opti.export()
###Output
_____no_output_____
###Markdown
Let's open these up and have a look at the data. `user_pool_opti_df` registers all users we spawned in our simulation.
###Code
user_pool_opti_df.tail()
###Output
_____no_output_____
###Markdown
Meanwhile, `chain_opti_df` lists all the transactions included in the chain.
###Code
chain_opti_df.tail()
###Output
_____no_output_____
###Markdown
With a simple join on the `sender` column we can associate each user with their included transaction. We look at the average value of included users after the second stationary point.
###Code
chain_opti_df[(chain_opti_df.block_height >= 150)].join(
user_pool_opti_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
When users expect to be included at least one block after they send their transaction, the average value of included users is around 19.2 Gwei.
###Code
user_pool_jump_df = user_pool_jump.export().rename(columns={ "pub_key": "sender" })
chain_jump_df = chain_jump.export()
chain_jump_df[(chain_jump_df.block_height >= 150)].join(
user_pool_jump_df.set_index("sender"), on="sender"
)[["value"]].mean()
###Output
_____no_output_____
###Markdown
But when users expect to be included at least _five_ blocks after, the average value of included users is around 18.7 Gwei, confirming that when users expect next block inclusion, higher value users get in and raise the basefee in the process. ConclusionWe've looked at 1559 when users with their own values and costs decide whether to join the pool or not based on the current basefee level. These users estimate their ultimate payoff by assuming _stationarity_: the demand between rounds follows the same arrival process and the same distribution of values and costs. In this stationary environment, basefee settles on some value and mostly stays there, allowing users to estimate their payoff should they wait for five or one blocks to be included.We've again left aside some important questions. Here all users simply leave a 1 Gwei premium in their transactions. In reality, we should expect users to attempt to "game" the system by leaving higher tips to get in first. We can suppose that in a stationary environment, "gaming" is only possible until basefee reaches its stationary point (during the transition period) and exhausts the feasible demand. We will leave this question for another notebook.(Temporary) non-stationarity is more interesting. The [5% meme](https://insights.deribit.com/market-research/analysis-of-eip-2593-escalator/) during which sudden demand shocks precipitate a large influx of new, high-valued transactions should also see users try to outcompete each other based on premiums alone, until basefee catches up. The question of whether 1559 offers anything in this case or whether the whole situation would look like a first price auction may be better settled empirically, but we can intuit that 1559 would smooth the process slightly by [offering a (laggy) price oracle](https://twitter.com/onurhsolmaz/status/1286068365812011009).And then we have the question of miner collusion, which rightfully agitates a lot of the ongoing conversation. In the simulations we do here, we instantiated one transaction pool only, which should tell you that we are looking at a "centralised", honest miner that includes transactions as much as possible, and not a collection or a cartel of miners cooperating. We can of course weaken this assumption and have several mining pools with their own behaviours and payoff evaluations, much like we modelled our users. We still would like to have a good theoretical understanding of the risks and applicability of miner collusion strategies. Onward!--- (Bonus) Ex post individual rationality_Individual rationality_ is the idea that agents won't join a mechanism unless they hope to make some positive payoff out of it. I'd rather not transact if my value for transacting minus my costs is negative.In general, we like this property and we want to make the mechanism individually rational to as many agents as possible. Yet, some mechanisms fail to satisfy _ex post_ individual rationality: I might _expect_ to make a positive payoff from the mechanism, but some _realisation_ of the mechanism exists where my payoff is negative.Take an auction. As long as my bid is lower or equal to my value for the auctioned item, the mechanism is ex post individually rational for me: I can never "overpay". If I value the item for 10 ETH and decide to bid 11 ETH, in a first-price auction where I pay for my bid if I have the highest, there is a realisation of the mechanism where I am the winner and I am asked to pay 11 ETH. 
My payoff is -1 ETH then.In the transaction fee market, ex post individual rationality is not guaranteed unless I can cancel my transaction. In the simulations here, we do not offer this option to our agents. They expect to wait for inclusion for a certain amount of blocks, and evaluate whether their payoff after that wait is positive or not to decide whether to send their transaction or not. However, some agents might wait longer than their initial estimation, in particular before the mechanism reaches stationarity. Some realisations of the mechanism then yield a negative payoff for these agents, and the mechanism is not ex post individually rational.Let's look at the agents' payoff using the transcript of transactions included in the chain. For each transaction, we want to find out what was the ultimate payoff for the agent who sent it in. If the transaction was included much later than the agent's initial estimation, this payoff is negative, and the mechanism wasn't ex post individually rational to them.
###Code
user_pool_df = user_pool.export().rename(columns={ "pub_key": "sender" })
chain_df = chain.export()
user_txs_df = chain_df.join(user_pool_df.set_index("sender"), on="sender")
###Output
_____no_output_____
###Markdown
In the next chunk we obtain the users' payoffs: their value minus the costs incurred from the transaction fee and the time they waited.
###Code
user_txs_df["payoff"] = user_txs_df.apply(
lambda row: row.user.payoff({
"current_block": row.block_height,
"gas_price": row.tx.gas_price({
"basefee": row.basefee * (10 ** 9) # we need basefee in wei
})
    }) / (10 ** 9), # the payoff is expressed in Gwei
axis = 1
)
user_txs_df["epir"] = user_txs_df.payoff.apply(
lambda payoff: payoff >= 0
)
###Output
_____no_output_____
###Markdown
Now we count the fraction of users in each block who received a positive payoff.
###Code
epir_df = pd.concat([
user_txs_df[["block_height", "tx_hash"]].groupby(["block_height"]).agg(["count"]),
user_txs_df[["block_height", "epir"]][user_txs_df.epir == True].groupby(["block_height"]).agg(["count"])
], axis = 1)
epir_df["percent_epir"] = epir_df.apply(
lambda row: row.epir / row.tx_hash * 100,
axis = 1
)
###Output
_____no_output_____
###Markdown
Let's plot it!
###Code
epir_df.reset_index().plot("block_height", ["percent_epir"])
###Output
_____no_output_____
|
NeuralNetworks/OptimizingNetwork.ipynb
|
###Markdown
Neural Network Optimization and TuningYou've learned how to build computational graphs in PyTorch and compute gradients. The final piece to training a network is applying the gradients to update the network parameters. In this tutorial you will learn how to implement a number of optimization techniques in PyTorch along with other tuning methods.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torchvision import transforms, datasets, models
import numpy as np
import matplotlib.pyplot as plt
from collections import namedtuple
from IPython.display import Image
%matplotlib inline
np.random.seed(2020)
###Output
_____no_output_____
###Markdown
We use the PyTorch dataset API to load a dataset with exactly the same properties as the MNIST handwritten digits dataset. However, instead of handwritten digits, this dataset contains images of 10 different **common clothing items**, hence the name **Fashion-MNIST**. Performance on MNIST saturates quickly with simple network architectures and optimization methods. This dataset is more difficult than MNIST and is useful to demonstrate the relative improvements of different optimization methods. Some of the characteristics are mentioned below.- 28x28 images- 10 classes- Single color channel (B&W)- Centered objects- 60000 training set members- 10000 test set members
###Code
# Fashion Class that enables the Dataset download and basic transformations
class Fashion(datasets.MNIST):
def __init__(self, root, train=True, transform=None, target_transform=None, download=False):
self.urls = [
'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz',
'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz',
'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz',
'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz',
]
super(Fashion, self).__init__(
root, train=train, transform=transform, target_transform=target_transform, download=download
)
def decode_label(l):
return ["Top",
"Trouser",
"Pullover",
"Dress",
"Coat",
"Sandal",
"Shirt",
"Sneaker",
"Bag",
"Ankle boot"
][l]
train_data = Fashion('data', train=True, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
test_data = Fashion('data', train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
###Output
0it [00:00, ?it/s]Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/Fashion/raw/train-images-idx3-ubyte.gz
99%|█████████▉| 26132480/26421880 [00:06<00:00, 5399928.22it/s]Extracting data/Fashion/raw/train-images-idx3-ubyte.gz to data/Fashion/raw
0it [00:00, ?it/s][ADownloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/Fashion/raw/train-labels-idx1-ubyte.gz
0%| | 0/29515 [00:00<?, ?it/s][A
56%|█████▌ | 16384/29515 [00:00<00:00, 132285.13it/s][A
0it [00:00, ?it/s][A[AExtracting data/Fashion/raw/train-labels-idx1-ubyte.gz to data/Fashion/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/Fashion/raw/t10k-images-idx3-ubyte.gz
0%| | 0/4422102 [00:00<?, ?it/s][A[A
0%| | 16384/4422102 [00:00<00:32, 137149.57it/s][A[A
1%| | 40960/4422102 [00:00<00:44, 97976.17it/s] [A[A
2%|▏ | 98304/4422102 [00:00<00:33, 128786.41it/s][A[A
5%|▍ | 212992/4422102 [00:01<00:24, 173985.88it/s][A[A
10%|▉ | 434176/4422102 [00:01<00:16, 238654.49it/s][A[A
15%|█▍ | 647168/4422102 [00:01<00:11, 320444.58it/s][A[A
27%|██▋ | 1187840/4422102 [00:01<00:07, 442308.79it/s][A[A
41%|████ | 1810432/4422102 [00:01<00:04, 613186.16it/s][A[A
55%|█████▍ | 2416640/4422102 [00:01<00:02, 838131.02it/s][A[A
64%|██████▎ | 2818048/4422102 [00:01<00:01, 1077138.04it/s][A[A
79%|███████▊ | 3481600/4422102 [00:01<00:00, 1431633.75it/s][A[A
96%|█████████▌| 4251648/4422102 [00:01<00:00, 1891269.47it/s][A[A
0it [00:00, ?it/s][A[A[AExtracting data/Fashion/raw/t10k-images-idx3-ubyte.gz to data/Fashion/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/Fashion/raw/t10k-labels-idx1-ubyte.gz
0%| | 0/5148 [00:00<?, ?it/s][A[A[AExtracting data/Fashion/raw/t10k-labels-idx1-ubyte.gz to data/Fashion/raw
Processing...
Done!
26427392it [00:18, 5399928.22it/s]
4423680it [00:21, 1891269.47it/s] [A[A
###Markdown
Random examples from the Fashion-MNIST dataset
###Code
idxs = np.random.randint(100, size=8)
f, a = plt.subplots(2, 4, figsize=(10, 5))
for i in range(8):
X = train_data.train_data[idxs[i]]
Y = train_data.train_labels[idxs[i]]
r, c = i // 4, i % 4
a[r][c].set_title(decode_label(Y))
a[r][c].axis('off')
a[r][c].imshow(X.numpy())
plt.draw()
###Output
32768it [00:18, 1766.41it/s]
4423680it [00:18, 243973.62it/s]
8192it [00:15, 526.69it/s]
26427392it [00:26, 983721.19it/s]
###Markdown
Build a modelAs we are more focussed on evaluating how the different optimization methods perform, we'll be constructing a very simple feedforward network.
###Code
class FashionModel(nn.Module):
def __init__(self):
super(FashionModel, self).__init__()
self.fc1 = nn.Linear(784, 64)
self.fc2 = nn.Linear(64, 32)
self.fc3 = nn.Linear(32, 10)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
        x = F.log_softmax(self.fc3(x), dim=1)
return x
print(FashionModel())
###Output
FashionModel(
(fc1): Linear(in_features=784, out_features=64, bias=True)
(fc2): Linear(in_features=64, out_features=32, bias=True)
(fc3): Linear(in_features=32, out_features=10, bias=True)
)
###Markdown
A Simple Optimizer in PyTorchThe most simple optimization method is to update the network parameters by adding the negative of the gradient scaled by a fixed learning rate $\eta$.$$ \textbf{W'} \leftarrow \textbf{W} - \eta \nabla L(\textbf{W}) $$PyTorch provides us a very simple and expressive API for creating custom optimizers. An optimizer in PyTorch subclasses `torch.optim.Optimizer` and is required to specify two methods.`__init__`: Must call the superclass `__init__` and provide a list of network parameters, `params`, to optimize and a dictionary of default values provided to each parameter group, `defaults`. `step`: Performs an update on the network parameters. The meat of your optimizer logic lies in the `step` method. In this method you should update your model parameters with the help of some useful internal datastructures. Let's define these to make the following code more clear.`self.param_groups`: When you initialize an optimizer object, you are required to provide the list of parameter objects to be optimized. In the case of `FashionModel`, there are 6 parameters -- each `Linear` layer has a weight matrix parameter and a bias vector. All of these 6 parameters are considered within a single `param_group`. This `group` will be a dictionary with an entry `params` that contains an iterable of all 6 parameters, as well as entries for all `defaults`. These `defaults` are generally useful for storing small values like hyperparameters that are standard across all parameter groups. There are more advanced cases where it can come in handy to have different values for certain entities depending on the `param_group`. `self.state`: This maintains state for a given parameter. Essentially it maps a parameter to a dictionary of data that you want to keep track of. This is useful in cases where you want to keep state on a per-parameter basis.**IMPORTANT**: Unlike most other use cases in PyTorch, operations on parameters and state data should be done inplace. This ensures that the updated parameters are not updated copies of the original. In the following sample implementations, you may see some unfamiliar operations. Functions like `torch.add_` and `torch.mul_` are just inplace analogues of standard PyTorch functions. See http://pytorch.org/docs/master/torch.html for further details. Let's write our *Trainer*
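Before we do, here is what a bare-bones optimizer following this recipe could look like -- an illustrative sketch of the plain SGD update above, not the implementation used later (the experiments below rely on the built-in `torch.optim` classes):
###Code
class SimpleSGD(torch.optim.Optimizer):
    def __init__(self, params, lr=0.01):
        defaults = dict(lr=lr)
        super(SimpleSGD, self).__init__(params, defaults)

    def step(self, closure=None):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                # In-place update: W <- W - lr * grad
                p.data.add_(-group['lr'] * p.grad.data)
###Output
_____no_output_____
###Markdown
It can be used like any built-in optimizer, e.g. `SimpleSGD(model.parameters(), lr=0.01)`. With that illustration in place, here is the training cradle.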
###Code
train_size = train_data.train_data.shape[0]
val_size, train_size = int(0.20 * train_size), int(0.80 * train_size) # 80 / 20 train-val split
test_size = test_data.test_data.shape[0]
batch_size = 100
# Add dataset to dataloader that handles batching
train_loader = torch.utils.data.DataLoader(train_data,
batch_size=batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(np.arange(val_size, val_size+train_size)))
val_loader = torch.utils.data.DataLoader(train_data,
batch_size=batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(np.arange(0, val_size)))
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=False)
# Setup metric class
Metric = namedtuple('Metric', ['loss', 'train_error', 'val_error'])
###Output
_____no_output_____
###Markdown
How to adjust Learning Rate`torch.optim.lr_scheduler` provides several methods to adjust the learning rate based on the number of epochs. Learning rate scheduling should be applied after optimizer’s update. See https://pytorch.org/docs/stable/optim.html for details.
###Code
def inference(model, loader, n_members):
correct = 0
for data, label in loader:
X = Variable(data.view(-1, 784))
Y = Variable(label)
out = model(X)
pred = out.data.max(1, keepdim=True)[1]
predicted = pred.eq(Y.data.view_as(pred))
correct += predicted.sum()
    return float(correct.numpy()) / n_members  # cast to float so the accuracy is not truncated by integer division
class Trainer():
"""
A simple training cradle
"""
def __init__(self, model, optimizer, load_path=None):
self.model = model
if load_path is not None:
self.model = torch.load(load_path)
self.optimizer = optimizer
def save_model(self, path):
torch.save(self.model.state_dict(), path)
def run(self, epochs):
print("Start Training...")
self.metrics = []
        for e in range(epochs):  # use the epochs argument passed to run()
epoch_loss = 0
correct = 0
for batch_idx, (data, label) in enumerate(train_loader):
self.optimizer.zero_grad()
X = Variable(data.view(-1, 784))
Y = Variable(label)
out = self.model(X)
pred = out.data.max(1, keepdim=True)[1]
predicted = pred.eq(Y.data.view_as(pred))
correct += predicted.sum()
loss = F.nll_loss(out, Y)
loss.backward()
self.optimizer.step()
epoch_loss += loss.item()
            scheduler.step()  # step the LR scheduler after this epoch's optimizer updates
            total_loss = epoch_loss/train_size
            train_error = 1.0 - float(correct)/train_size
val_error = 1.0 - inference(self.model, val_loader, val_size)
print("epoch: {0}, loss: {1:.8f}".format(e+1, total_loss))
self.metrics.append(Metric(loss=total_loss,
train_error=train_error,
val_error=val_error))
### LET'S TRAIN ###
# A function to apply "normal" distribution on the parameters
def init_randn(m):
if type(m) == nn.Linear:
m.weight.data.normal_(0,1)
# We first initialize a Fashion Object and initialize the parameters "normally".
normalmodel = FashionModel()
normalmodel.apply(init_randn)
n_epochs = 8
print("SGD OPTIMIZER")
SGDOptimizer = torch.optim.SGD(normalmodel.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(SGDOptimizer, step_size=4, gamma=0.1)
sgd_trainer = Trainer(normalmodel, SGDOptimizer)
sgd_trainer.run(n_epochs)
sgd_trainer.save_model('./sgd_model.pt')
print('')
print("ADAM OPTIMIZER")
normalmodel = FashionModel()
normalmodel.apply(init_randn)
AdamOptimizer = torch.optim.Adam(normalmodel.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(AdamOptimizer, step_size=4, gamma=0.1)
adam_trainer = Trainer(normalmodel, AdamOptimizer)
adam_trainer.run(n_epochs)
adam_trainer.save_model('./adam_model.pt')
print('')
print("RMSPROP OPTIMIZER")
normalmodel = FashionModel()
normalmodel.apply(init_randn)
RMSPropOptimizer = torch.optim.RMSprop(normalmodel.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(RMSPropOptimizer, step_size=4, gamma=0.1)
rms_trainer = Trainer(normalmodel, RMSPropOptimizer)
rms_trainer.run(n_epochs)
rms_trainer.save_model('./rmsprop_model.pt')
print('')
### TEST ###
model = FashionModel()
model.load_state_dict(torch.load('./sgd_model.pt'))
test_acc = inference(model, test_loader, test_size)
print("Test accuracy of model optimizer with SGD: {0:.2f}".format(test_acc * 100))
model = FashionModel()
model.load_state_dict(torch.load('./adam_model.pt'))
test_acc = inference(model, test_loader, test_size)
print("Test accuracy of model optimizer with Adam: {0:.2f}".format(test_acc * 100))
model = FashionModel()
model.load_state_dict(torch.load('./rmsprop_model.pt'))
test_acc = inference(model, test_loader, test_size)
print("Test accuracy of model optimizer with RMSProp: {0:.2f}".format(test_acc * 100))
### VISUALIZATION ###
def training_plot(metrics):
plt.figure(1)
plt.plot([m.loss for m in metrics], 'b')
plt.title('Training Loss')
plt.show()
training_plot(sgd_trainer.metrics)
training_plot(adam_trainer.metrics)
training_plot(rms_trainer.metrics)
###Output
_____no_output_____
###Markdown
Parameter InitializationWhile training a network, the initial value of the weights plays a significant role. In the extreme case, an oracle could just set the weights directly to values that minimize the objective function, and in practical cases a good initialization can bring us to a more favorable starting position in the parameter space. This raises the question of how to choose these weights. - What happens if all the weights are set to zero? The gradients become zero, and the network finds itself without a direction. - What if all of them are set to the same non-zero value? Although the gradients are no longer zero, each neuron has the same weight and follows the same gradient. Such neurons will continue to have the same value, since they're identical. So any initialization scheme must break this symmetry somehow, and randomly initializing the weights is a first step in that direction.Let's begin with creating a weight initialization function that samples from **N(0,1)**. A clean way of initializing the weights is to access the network parameters by traversing all modules inside the network, and then applying the desired initialization. This method also allows us to encapsulate all the initializations into a single function.
###Code
def init_randn(m):
if type(m) == nn.Linear:
m.weight.data.normal_(0,1)
###Output
_____no_output_____
###Markdown
Now let's use this scheme to initialize the network.Note that *apply(fn)* applies the function *fn* recursively to every submodule (as returned by .children()) as well as self. Also, since it is applied to itself as well, you must take care to select the appropriate type of module *m* and apply the initialization to it.
###Code
normalmodel = FashionModel()
normalmodel.apply(init_randn)
###Output
_____no_output_____
###Markdown
Custom initializationsWe could also choose a different way to initialize the weights, where you explicitly copy some values into the weights.
###Code
def init_custom(m):
if type(m) == nn.Linear:
rw = torch.randn(m.weight.data.size())
m.weight.data.copy_(rw)
###Output
_____no_output_____
###Markdown
Now let's use this initialization scheme to implement Xavier initialization. Xavier initialization is a way of initializing the weights such that the variance of a layer's outputs matches the variance of its inputs. At each layer, the fan_in and fan_out (i.e. input connections and output connections) might be different. Since each output is a weighted sum over fan_in inputs, a layer with fewer inputs needs weights with a larger spread to produce the same output variance as a layer with many inputs and smaller weights. This is the intuition behind Xavier initialization, and it is why the standard deviation used below is $\sqrt{2/(fan\_in + fan\_out)}$.
###Code
def init_xavier(m):
if type(m) == nn.Linear:
fan_in = m.weight.size()[1]
fan_out = m.weight.size()[0]
std = np.sqrt(2.0 / (fan_in + fan_out))
m.weight.data.normal_(0,std)
xaviermodel = FashionModel()
xaviermodel.apply(init_xavier)
### LET'S TRAIN ###
n_epochs = 3
print("NORMAL INIT WEIGHTS")
AdamOptimizer = torch.optim.Adam(normalmodel.parameters(), lr=0.001)
normal_trainer = Trainer(normalmodel, AdamOptimizer)
normal_trainer.run(n_epochs)
normal_trainer.save_model('./normal_model.pt')
print('')
print("XAVIER INIT WEIGHTS")
AdamOptimizer = torch.optim.Adam(xaviermodel.parameters(), lr=0.001)
xavier_trainer = Trainer(xaviermodel, AdamOptimizer)
xavier_trainer.run(n_epochs)
xavier_trainer.save_model('./xavier_model.pt')
print('')
### VISUALIZATION ###
def training_plot(metrics):
plt.figure(1)
plt.plot([m.loss for m in metrics], 'b')
plt.title('Training Loss')
plt.show()
training_plot(normal_trainer.metrics)
training_plot(xavier_trainer.metrics)
###Output
_____no_output_____
###Markdown
Using pretrained weightsIn the previous section we saw that initializations can start the training from a good spot. In addition to these schemes, you might also need to have specific methods to initialize the weights in different layers. For example, you might want to use a pretrained model like Alexnet to give your network a head start for visual recognition tasks. Let's load the pretrained Alexnet model and see how it works.
###Code
alexnet_model = models.alexnet(pretrained=True)
###Output
Downloading: "https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth" to /Users/kalpak/.cache/torch/checkpoints/alexnet-owt-4df8aa71.pth
100%|██████████| 233M/233M [01:23<00:00, 2.93MB/s]
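###Markdown
A typical way to make use of such pretrained weights (a sketch, not something we train here) is to freeze the convolutional feature extractor and swap in a new classification head for the task at hand. Note that AlexNet expects 3-channel 224x224 inputs, so it is not directly applicable to the 28x28 Fashion-MNIST images without resizing and channel replication.
###Code
# Illustrative: freeze the pretrained features and replace the final classifier layer
for param in alexnet_model.features.parameters():
    param.requires_grad = False

num_classes = 10  # hypothetical target task with 10 classes
alexnet_model.classifier[6] = nn.Linear(alexnet_model.classifier[6].in_features, num_classes)
###Output
_____no_output_____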
###Markdown
Adding MomentumWe can make use of the `self.state` data-structure to maintain a copy of an accumulated gradient that we also decay at each step. Once again we use inplace operations to avoid unneccesary buffer allocation. Recall a standard update with momentum given decay rate $\mu$.$$ \begin{align}\textbf{V'} &= \mu \textbf{V} - \eta \nabla L(\textbf{W})\\\textbf{W'} &= \textbf{W} + \textbf{V'}\\\end{align}$$ Batch NormalizationBatch normalization is a relatively simple but significant improvement in training neural networks. In machine learning, *covariate shift* is a phenomenon in which the covariate distribution is non-stationary over the course of training. This is a common phenomenon in online learning. When training a neural network on a fixed dataset, there is no covariate shift (excluding sample noise from minibatch approximation), but the distribution of individual node and layer activity shifts as the network parameters are updated. As an abstraction, we can consider each node's activity to be a covariate of the following nodes in the network. Thus we can think of the non-stationarity of node (and layer) activations as a sort of *internal covariate shift*. Why is internal covariate shift a problem? Each subsequent layer has to account for a shifting distribution of its inputs. For saturating non-linearities the problem becomes even more dire, as the shift in activity will more likely place the unit output in the saturated region of the non-linearity.
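Returning to the momentum update for a moment: a sketch of a `step` method that keeps the velocity in `self.state` might look like the following (illustrative only; it follows the classical formulation above, whose bookkeeping differs slightly from the built-in `torch.optim.SGD`):
###Code
class SGDWithMomentum(torch.optim.Optimizer):
    def __init__(self, params, lr=0.01, momentum=0.9):
        defaults = dict(lr=lr, momentum=momentum)
        super(SGDWithMomentum, self).__init__(params, defaults)

    def step(self, closure=None):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                state = self.state[p]
                if 'velocity' not in state:
                    state['velocity'] = torch.zeros_like(p.data)
                v = state['velocity']
                # V' = mu * V - lr * grad, then W' = W + V' (all in place)
                v.mul_(group['momentum']).add_(-group['lr'] * p.grad.data)
                p.data.add_(v)
###Output
_____no_output_____
###Markdown
Back to batch normalization: a skeleton of a 1D batch norm layer follows.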
###Code
class BatchNorm(nn.Module):
    def __init__(self, num_features, affine=True):
        super(BatchNorm, self).__init__()
        self.num_features = num_features
        self.affine = affine
        self.weight = nn.Parameter(torch.Tensor(num_features))
        self.bias = nn.Parameter(torch.Tensor(num_features))
        self.register_buffer('running_mean', torch.zeros(num_features))
        self.register_buffer('running_var', torch.ones(num_features))
        self.reset_parameters()
    def reset_parameters(self):
        self.running_mean.zero_()
        self.running_var.fill_(1)
        self.weight.data.uniform_()
        self.bias.data.zero_()
    def forward(self, x):
        # Normalize with the statistics of the current mini-batch, then scale and shift.
        # (A complete implementation would also update the running buffers during training
        # and use them instead of the batch statistics at evaluation time.)
        mean = x.mean(dim=0, keepdim=True)
        var = x.var(dim=0, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + 1e-5)
        return x_hat * self.weight + self.bias
###Output
_____no_output_____
###Markdown
OverfittingDeep neural networks contain multiple non-linear hidden layers and this makes them very expressive models that can learn very complicated relationships between their inputs and outputs. With limited training data, however, many of these complicated relationships will be the result of sampling noise, so they will exist in the training set but not in real test data even if it is drawn from the same distribution. This leads to overfitting and many methods have been developed for reducing it. DropoutDropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very efficient way of performing model averaging with neural networks. The term "dropout" refers to dropping out units in a neural network. Regularization (weight_decay)Weight decay specifies regularization in the neural network. During training, a regularization term is added to the network's loss to compute the backpropagation gradient. The weight decay value determines how dominant this regularization term will be in the gradient computation. As a rule of thumb, the more training examples you have, the weaker this term should be. The more parameters you have the higher this term should be.
###Code
class FashionModel_Tricks(nn.Module):
def __init__(self):
super(FashionModel_Tricks, self).__init__()
self.fc1 = nn.Linear(784, 64)
self.bnorm1 = nn.BatchNorm1d(64)
self.dp1 = nn.Dropout(p=0.2)
self.fc2 = nn.Linear(64, 32)
self.bnorm2 = nn.BatchNorm1d(32)
self.dp2 = nn.Dropout(p=0.1)
self.fc3 = nn.Linear(32, 10)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.dp1(self.bnorm1(x))
x = F.relu(self.fc2(x))
x = self.dp2(self.bnorm2(x))
        x = F.log_softmax(self.fc3(x), dim=1)
return x
print(FashionModel_Tricks())
### TRAIN MODELS WITH BATCHNORM AND DROPOUT ###
n_epochs = 10
model = FashionModel_Tricks()
optimizer = torch.optim.SGD(model.parameters(), lr = 0.001, momentum = 0.9, weight_decay = 0.001)
btrainer = Trainer(model, optimizer)
btrainer.run(n_epochs)
btrainer.save_model('./dropout-batchnorm_optimized_model.pt')
training_plot(btrainer.metrics)
print('')
###Output
Start Training...
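###Markdown
For intuition, the `weight_decay` value passed to the optimizer above is, for plain SGD, equivalent to adding an L2 penalty $\frac{\lambda}{2}\|W\|^2$ to the loss, i.e. adding $\lambda W$ to each gradient. A manual sketch of the same effect (illustrative only; adaptive optimizers couple the decay with their internal statistics differently):
###Code
# Illustrative: what weight_decay=lambda does to the gradients before the SGD update
lam = 0.001
for p in model.parameters():
    if p.grad is not None:
        p.grad.data.add_(lam * p.data)  # grad <- grad + lambda * W
###Output
_____no_output_____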
###Markdown
Gradient ClippingDuring training, the gradients can occasionally grow extremely large and overflow (i.e. become NaN), which is easily detectable at runtime; in less extreme situations, the model starts overshooting past the minima. This issue is called the exploding gradient problem. Gradient clipping will ‘clip’ the gradients, i.e. cap them at a threshold value, to prevent them from getting too large.
###Code
#Gradient Clipping
# `clip_grad_norm_` helps prevent the exploding gradient problem. To be used before optimizer.step() during training
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.25)
###Output
_____no_output_____
###Markdown
Annealing Learning RateIn training deep networks, it is usually helpful to anneal the learning rate over time. Good intuition to have in mind is that with a high learning rate, the system contains too much kinetic energy and the parameter vector bounces around chaotically, unable to settle down into deeper, but narrower parts of the loss function. Knowing when to decay the learning rate can be tricky: Decay it slowly and you’ll be wasting computation bouncing around chaotically with little improvement for a long time. But decay it too aggressively and the system will cool too quickly, unable to reach the best position it can. One way of doing it is using step decay. Step decay schedule drops the learning rate by a factor every few epochs. The mathematical form of step decay is:$$\eta = \eta_0 * drop^{\left\lfloor \frac{epoch}{epochs\_drop} \right\rfloor}$$
###Code
import math

def step_decay(epoch):
initial_lrate = 0.1
drop = 0.5
epochs_drop = 10.0
lrate = initial_lrate * math.pow(drop,
math.floor((1+epoch)/epochs_drop))
return lrate
###Output
_____no_output_____
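###Markdown
The function above returns an absolute learning rate, so one simple way to apply it (a sketch) is to overwrite the learning rate of every parameter group at the start of each epoch:
###Code
# Illustrative usage: set the learning rate of every parameter group for the given epoch
def apply_step_decay(optimizer, epoch):
    lrate = step_decay(epoch)
    for group in optimizer.param_groups:
        group['lr'] = lrate
    return lrate
###Output
_____no_output_____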
|
tsa/jose/UDEMY_TSA_FINAL (1)/02-Pandas/08-Pandas-Exercises-Solutions.ipynb
|
###Markdown
______Copyright Pierian DataFor more information, visit us at www.pieriandata.com Pandas ExercisesTime to test your new pandas skills! Use the population_by_county.csv file in the Data folder to complete the tasks in bold below!NOTE: ALL TASKS CAN BE DONE IN ONE LINE OF PANDAS CODE. GET STUCK? NO PROBLEM! CHECK OUT THE SOLUTIONS LECTURE!IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output! 1. Import pandas and read in the population_by_county.csv file into a dataframe called pop.
###Code
import pandas as pd
pop = pd.read_csv('../Data/population_by_county.csv')
###Output
_____no_output_____
###Markdown
2. Show the head of the dataframe
###Code
# CODE HERE
# DON'T WRITE HERE
pop.head()
###Output
_____no_output_____
###Markdown
3. What are the column names?
###Code
# DON'T WRITE HERE
pop.columns
###Output
_____no_output_____
###Markdown
4. How many States are represented in this data set? Note: the data includes the District of Columbia
###Code
# DON'T WRITE HERE
pop['State'].nunique()
###Output
_____no_output_____
###Markdown
5. Get a list or array of all the states in the data set.
###Code
# DON'T WRITE HERE
pop['State'].unique()
###Output
_____no_output_____
###Markdown
6. What are the five most common County names in the U.S.?
###Code
# DON'T WRITE HERE
pop['County'].value_counts().head()
###Output
_____no_output_____
###Markdown
7. What are the top 5 most populated Counties according to the 2010 Census?
###Code
# DON'T WRITE HERE
pop.sort_values('2010Census', ascending=False).head()
###Output
_____no_output_____
###Markdown
8. What are the top 5 most populated States according to the 2010 Census?
###Code
# DON'T WRITE HERE
pop.groupby('State').sum().sort_values('2010Census', ascending=False).head()
###Output
_____no_output_____
###Markdown
9. How many Counties have 2010 populations greater than 1 million?
###Code
# DON'T WRITE HERE
# pop['2010Census'].apply(lambda qty: qty>1000000).value_counts()
sum(pop['2010Census']>1000000)
###Output
_____no_output_____
###Markdown
10. How many Counties don't have the word "County" in their name?
###Code
# DON'T WRITE HERE
# pop['County'].apply(lambda name: 'County' not in name).value_counts()
sum(pop['County'].apply(lambda name: 'County' not in name))
###Output
_____no_output_____
###Markdown
11. Add a column that calculates the percent change between the 2010 Census and the 2017 Population Estimate
###Code
# CODE HERE
# USE THIS TO SHOW THE RESULT
pop.head()
# DON'T WRITE HERE
pop['PercentChange'] = 100*(pop['2017PopEstimate']-pop['2010Census'])/pop['2010Census']
pop.head()
###Output
_____no_output_____
###Markdown
Bonus: What States have the highest estimated percent change between the 2010 Census and the 2017 Population Estimate?This will take several lines of code, as it requires a recalculation of PercentChange.
###Code
# CODE HERE
# DON'T WRITE HERE
pop2 = pd.DataFrame(pop.groupby('State').sum())
pop2['PercentChange'] = 100*(pop2['2017PopEstimate']-pop2['2010Census'])/pop2['2010Census']
pop2.sort_values('PercentChange', ascending=False).head()
###Output
_____no_output_____
|
code/8_Validate_Morphospace.ipynb
|
###Markdown
Validate the MorphospaceUse the perturbation vectors to see how the individual drug perturbations affected the morphological space of MCF10A cells 1.) Check cellular intrinsic fluctuation (DMSO morphological fluctuation) 2.) Show Morphological space (heatmap and PCA) 3.) Check replicate similarity 4.) Check similarity for same MOAs 5.) Check similarity with same ATC 6.) Check similarity in dependence of PPI distance 7.) Make summary plot
###Code
import numpy as np
from scipy.spatial import distance
from matplotlib import pyplot as plt
import scipy.stats as stats
from sklearn.decomposition import PCA
import random
import networkx as nx
from decimal import Decimal
import seaborn as sns
cm = plt.cm.get_cmap('tab20')
#%matplotlib inline
# Effect size
def cohen_d(x, y):
nx = len(x)
ny = len(y)
dof = nx + ny - 2
return (np.mean(x) - np.mean(y)) / np.sqrt(
((nx - 1) * np.std(x, ddof=1) ** 2 + (ny - 1) * np.std(y, ddof=1) ** 2) / dof)
###Output
_____no_output_____
###Markdown
1. Check cellular intrinsic fluctuation Check DMSO-treated cells to see how much cells can change morphologically by pure chance, e.g. due to technical errors
###Code
# open file that contains feature vectors
path = '../data/Validate_Morphospace/All_Vectors_Combined.csv'
fp = open(path)
#fp.next()
features = fp.readline().split(',')[1:]
numfeatures = len(features)
# -----------------------------------------------------------
# extract vector lengths (norm) from file and collect them in a list (AllVectorvalues)
# extract DMSO vectors for the two batches separate (DMSO_Vectors_Batch1,DMSO_Vectors_Batch2)
# extract actual all individual vectors (Vector_Dictionary); contains all vectors i.e. singles, combinations, DMSO, PosCon
# extract all single vectors (drug_replicates), with a list of the individual replicates (7/6 per batch1/2)
# get a list of all perturbations i.e. singles and combinations but not DMSO/PosCon (all_Perturbations)
# -----------------------------------------------------------
#lists as described above
AllVectorvalues = []
DMSO_Wells = []
DMSO_Vectors_Batch1 = []
DMSO_Vectors_Batch2 = []
Vector_Dictionary = {}
drug_replicates = {}
all_Perturbations = []
#go through the vector file and assign all vectors correctly
for line in fp:
tmp = line.strip().split(',')
drug1, drug2 = tmp[0].split('_')[0].split('|')
plate = tmp[0].split('_')[2]
values = list(np.float_(tmp[1:]))
vector_size = np.linalg.norm(values)
#split here the DMSO wells into the two batches
if drug1 == 'DMSO':
DMSO_Wells.append(vector_size)
if int(plate) < 1315065:
DMSO_Vectors_Batch1.append(values)
else:
DMSO_Vectors_Batch2.append(values)
#add vector length (norm) and actual vectors
AllVectorvalues.append(vector_size)
Vector_Dictionary[tmp[0]] = values
#keep individual drug replicates
if drug2 == 'DMSO':
        if drug1 in drug_replicates:
drug_replicates[drug1].append(values)
else:
drug_replicates[drug1] = [values]
#keep the overall perturbations
if drug1 != 'DMSO' and drug1 != 'PosCon':
all_Perturbations.append(values)
fp.close()
###Output
_____no_output_____
###Markdown
2. Show morphological space a. Make heatmap of all treatments
###Code
#extract all the drug perturbations
vectorsToPlot = []
for key in Vector_Dictionary.keys():
treatment = key.split('_')[0].split('|')
#vector_size = np.linalg.norm(Vector_Dictionary[key])
    #do not include DMSO or PosCon wells
if 'DMSO' in treatment[0] or 'PosCon' in treatment[0]:
continue
vectorsToPlot.append(Vector_Dictionary[key])
#create an SNS clustermap plot with the drug perturbations
print len(vectorsToPlot)
sns.clustermap(vectorsToPlot, cmap='RdBu_r', metric='euclidean', method ='average')
plt.savefig('../results/Validate_Morphospace/AllPerturationVectorsClusterMap_OnlySignificantEuclidean.pdf', dpi = 1200)
#plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
b. Load significance for single perturbations (see notebook 9_Calculate_Interactions). This is used to differentiate between significantly perturbed single drugs and non-significant perturbers
###Code
#set a significance threshold (Mahalanobis distance) above which a single perturbation counts as significant
perturbaion_significnace = 7
#save the max. mahalanobis distance over the two batches
drug_perturbation_significances = {}
number_valid_drugs = 0
#Open file containing the mahalanobis distance of the single perturbations
fp = open('../data/Validate_Morphospace/Singles_Significance.csv')
fp.next()
for line in fp:
tmp = line.strip().split(',')
#print tmp
values = []
if tmp[1] != "No_Cells":
value1 = float(tmp[1])
values.append(value1)
if tmp[3] != "No_Cells":
value1 = float(tmp[3])
values.append(value1)
if len(values) > 0:
drug_perturbation_significances[tmp[0]] = max(values)
number_valid_drugs += 1
else:
drug_perturbation_significances[tmp[0]] = 0
print 'Number of drugs (successfully transferred in screen): %d' %number_valid_drugs
print 'Number of Significant drugs: %d' %len([x for x in drug_perturbation_significances if drug_perturbation_significances[x] > perturbaion_significnace])
print 'Percent: %.3f' %(len([x for x in drug_perturbation_significances if drug_perturbation_significances[x] > perturbaion_significnace])/float(number_valid_drugs))
###Output
_____no_output_____
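###Markdown
For intuition, the Mahalanobis distances loaded above quantify how far a perturbation vector lies from the DMSO control distribution, taking feature covariance into account. A sketch of how such a score can be computed (illustrative; the actual values come from notebook 9_Calculate_Interactions):
###Code
def mahalanobis_distance(vector, control_vectors):
    # Distance of one perturbation vector from the distribution of control (DMSO) vectors
    control = np.array(control_vectors)
    mu = control.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(control, rowvar=False))  # pseudo-inverse for stability
    diff = np.array(vector) - mu
    return np.sqrt(diff.dot(inv_cov).dot(diff))
###Output
_____no_output_____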
###Markdown
c. Make PCA of drug treatmentsGet the 11 treatments (single) for each of the 267 drug treatments and create a mean vector (column means)
###Code
#create the combined single drug vectors (over all batches);
# X contains the vectors
# Y contains the drug names
# col contains a color for singificant vectors (mahalanobis distance > 7)
X = []
Y = []
col = []
#Use the previously loaded significance scores to differentiate (i) strong perturbers from (ii) weak perturbers.
#Create the mean vectors over all 11 replicates
col_count = 0
Drug_To_Color = {}
Combined_Drug_Vectors = {}
significant_drugs = 0
for drug in drug_perturbation_significances.keys():
val = np.array(drug_replicates[drug])
#create mean vector
tmp = []
for i in range(0,numfeatures):
tmp.append(np.mean(val[:,i]))
X.append(tmp)
Y.append(drug)
Combined_Drug_Vectors[drug] = tmp
#check if drug is a strong perturber (add color)
if drug_perturbation_significances[drug] > perturbaion_significnace:
if drug not in Drug_To_Color.keys():
significant_drugs += 1
color = cm(col_count)
Drug_To_Color[drug] = color
col_count += 1
if col_count > 18:
col_count = 0
else:
color = Drug_To_Color[drug]
#else color = grey
else:
color = (0.8, 0.8, 0.8, 1)
col.append(color)
#Randomly remove some combination points to reduce overplotting around the origin.
# This is important because the vast majority of all combinations still does nothing --> by far the biggest density is around DMSO/zero perturbations
# By removing some of the less significant points, the focus is put more on the significant ones
combination_vectors = []
for key in Vector_Dictionary.keys():
treatment = key.split('_')[0].split('|')
if 'CLOUD' in treatment[1]:
vector_size = np.linalg.norm(Vector_Dictionary[key])
if vector_size < 0.5 and random.randint(0,500)< 250:
continue
elif vector_size < 1.2 and random.randint(0,100)< 92:
continue
elif vector_size < 1.2 and random.randint(0,100)< 15:
continue
combination_vectors.append(Vector_Dictionary[key])
'''
Make PCA plot of morphological space
'''
print 'Number of drugs: %d' %number_valid_drugs
print 'Number of significant drugs: %d' %significant_drugs
#Make a PCA plot for all SINGLE combined vectors
pca = PCA(n_components=2)
#pca.fit(X)
pca.fit(combination_vectors)
#transform the combiantions to this space
CombArea = pca.transform(combination_vectors)
#Make a scatter and KDE plot
fig, ax = plt.subplots(figsize=(10,10))
#sns.kdeplot(CombArea[:, 0], CombArea[:, 1], n_levels=10,cmap="Blues",alpha=0.6,gridsize = 100,shade=True, shade_lowest=True, bw=0.31, kernel='gau' )
sns.kdeplot(CombArea[:, 0], CombArea[:, 1], n_levels=10,cmap="Blues",alpha=0.6,gridsize = 100,shade=True, shade_lowest=True, bw=0.41, kernel='gau' )
X_Transformed = pca.transform(X)
ax.scatter(CombArea[:, 0], CombArea[:, 1], alpha=0.2,c='grey', s=10)
ax.scatter(X_Transformed[:, 0], X_Transformed[:, 1], alpha=0.6,c=col, s=100)
ax.scatter(X_Transformed[:, 0], X_Transformed[:, 1], alpha=1,c=col, s=100)
#Annotate the points
#for i, txt in enumerate(Y):
# ax.annotate(txt, (X_Transformed[:, 0][i], X_Transformed[:, 1][i]), size=8)
#print (X[:, 0][i], X[:, 1][i])
#print txt
#print '--'
print pca.explained_variance_ratio_
ax.set_xlabel(str(pca.explained_variance_ratio_[0]))
ax.set_ylabel(str(pca.explained_variance_ratio_[1]))
ax.set_xlabel('PC 1')
ax.set_ylabel('PC 2')
ax.set_xlim([-2,3.5])
ax.set_ylim([-2.5,2.5])
#plt.show()
#plt.savefig('../results/Validate_Morphospace/Combined_Vectors_LabelsKDE.pdf')
plt.savefig('../results/Validate_Morphospace/Combined_Vectors_NoLabelsKDE.pdf')
plt.close()
###Output
_____no_output_____
###Markdown
3. Check replicate similarityCheck similarity (cosine) between replicates of the same drugInclude ONLY significant drugs, as non-significant drugs correspond to random fluctuations around 0. Cosine similarity would still only look at the angle and hence see big differences between these small, insignificant random fluctuations. Significant drugs for these calculations are simply defined via the DMSO fluctuations --> 2x std away from the DMSO mean should be sufficient for including only meaningful drug replicates.
###Code
#Get mean vector lenght and std of all DMSO vectors
mean = np.mean(DMSO_Wells)
std = np.std(DMSO_Wells)
threshold = mean + 2*std
print 'Threshold: %.2f' %threshold
#calculate the similarity for all drug that are significant (here threshold as there is no mean taken)
sign_single_Drugs = {}
sign_unique_Drugs = set()
for key in Vector_Dictionary.keys():
drug1, drug2 = key.split('_')[0].split('|')
plate = int(key.split('_')[2])
values = Vector_Dictionary[key]
#Include only replicates that are at least 2xstd away from the DMSO mean
if drug2 == 'DMSO' and np.linalg.norm(values) > threshold:
sign_single_Drugs[drug1+'_'+str(plate)] = values
sign_unique_Drugs.add(drug1)
print 'Number of significant replicates: %d' %len(sign_single_Drugs)
print 'Number of significant unique drugs: %d' %len(sign_unique_Drugs)
#go through all significant replicates and create a plot indicating the similarity for same/different drug perturbations
'''
same/not_same (the two lists that do not distinguish between the two batches --> taken for final plot)
'''
#Split the same/not same between the two batches as well as combined
not_same = []
same = []
same_plates_1 = []
not_same_plates_1 = []
same_plates_2 = []
not_same_plates_2 = []
for key1,value1 in sign_single_Drugs.iteritems():
plate1 = int(key1.split('_')[1])
for key2, value2 in sign_single_Drugs.iteritems():
plate2 = int(key2.split('_')[1])
if key1 > key2:
sim = 1-distance.cosine(value1, value2) # calculate cosine similarity between two vectors
#Batch1
if (plate1 < 1315065 and plate2 < 1315065):
if key1[0:8] == key2[0:8]:
same_plates_1.append(sim)
else:
not_same_plates_1.append(sim)
#Batch2
if (plate1 >= 1315065 and plate2 >= 1315065):
if key1[0:8] == key2[0:8]:
same_plates_2.append(sim)
else:
not_same_plates_2.append(sim)
#Pooled batch1 and batch2
if key1[0:8] == key2[0:8]:
same.append(sim)
else:
not_same.append(sim)
replicate_same = list(same)
replicate_NotSame = list(not_same)
#Cohen's D > 0.8 is considered to be a large effect already (see http://staff.bath.ac.uk/pssiw/stats2/page2/page14/page14.html)
# Maybe use foldchange to show how much more similar same drug are
print 'Pval Batch1: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same_plates_1,not_same_plates_1)[1],cohen_d(same_plates_1,not_same_plates_1))
print 'Pval Batch2: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same_plates_2,not_same_plates_2)[1],cohen_d(same_plates_2,not_same_plates_2))
print 'Pval All: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same,not_same)[1],cohen_d(same,not_same))
#plot output
plt.title('Similarity between replicates')
plt.boxplot([same_plates_1,not_same_plates_1,same_plates_2,not_same_plates_2,same,not_same])
plt.xticks(range(1,7),['Batch1_Same','Batch1_Different','Batch2_Same','Batch2_Different','All_Same','All_Different'], rotation=15, fontsize= 6)
plt.xlabel('Perturbation Group')
plt.ylabel('Cosine Similarity')
#plt.show()
plt.savefig('../results/Validate_Morphospace/Replicate_CosineSimilarity_Boxplot.pdf')
plt.close()
#calculate error bars
errors=[1.96 * (np.std(x) / np.sqrt(float(len(x)))) for x in [same_plates_1,not_same_plates_1,same_plates_2,not_same_plates_2,same,not_same]]
# set width of bar
barWidth = 0.4
#Include measurements (same replicates)
measurments = [same_plates_1,same_plates_2,same]
measurments_errors = [errors[0],errors[2],errors[4]]
#Include controls (randomly picked pairs)
controls = [not_same_plates_1,not_same_plates_2,not_same]
controls_errors = [errors[1],errors[3],errors[5]]
# Set position of bar on X axis
r1 = np.arange(len(measurments))
r2 = [x + barWidth for x in r1]
#Plot bar chart
plt.title('Replicate Similarity')
plt.bar(r1,[np.mean(x) for x in measurments],width=barWidth, yerr=measurments_errors,alpha=0.5, ecolor='black', capsize=10,color='#40B9D4')
plt.bar(r2,[np.mean(x) for x in controls],width=barWidth, yerr=controls_errors,alpha=0.5, ecolor='black', capsize=10,color='grey')
plt.legend(['Same','Different'])
plt.xticks(range(0,3),['Batch1','Batch2','Combined'], rotation=15, fontsize= 6)
plt.xlabel('Batch')
plt.ylabel('Cosine Similarity')
plt.savefig('../results/Validate_Morphospace/Replicate_CosineSimilarity_Barplot.pdf')
#plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
4. Check similarity for same MOAs Same as before (3.) but bin vectors based on the mechanism of action
###Code
CLOUD_To_MOA = {}
fp = open('../data/Validate_Morphospace/CLOUD_to_MechanismOfAction.csv','r')
fp.next()
for line in fp:
tmp = line.strip().split(',')
CLOUD_To_MOA[tmp[0]] = tmp[4]
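# Hedged alternative (assuming pandas is imported, the drug ID sits in column 0 and the
# mechanism of action in column 4): the same mapping could be built as
#   moa_df = pd.read_csv('../data/Validate_Morphospace/CLOUD_to_MechanismOfAction.csv')
#   CLOUD_To_MOA = dict(zip(moa_df.iloc[:, 0], moa_df.iloc[:, 4]))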
#go through all significant replicates and create a plot indicating the similarity for same/different drug perturbations regarding their mechanism of action
'''
same/not_same (the two lists that do not distinguish between the two batches --> taken for final plot), same means same mechanism of action/ not_same for different
'''
not_same = []
same = []
same_plates_1 = []
not_same_plates_1 = []
same_plates_2 = []
not_same_plates_2 = []
for key1,value1 in sign_single_Drugs.iteritems():
plate1 = int(key1.split('_')[1])
for key2, value2 in sign_single_Drugs.iteritems():
plate2 = int(key2.split('_')[1])
if key1 > key2:
sim = 1-distance.cosine(value1, value2) # calculate cosine similarity between two vectors
#Batch1
if (plate1 < 1315065 and plate2 < 1315065):
if CLOUD_To_MOA[key1[0:8]] == CLOUD_To_MOA[key2[0:8]]:
same_plates_1.append(sim)
else:
not_same_plates_1.append(sim)
#Batch2
if (plate1 >= 1315065 and plate2 >= 1315065):
if CLOUD_To_MOA[key1[0:8]] == CLOUD_To_MOA[key2[0:8]]:
same_plates_2.append(sim)
else:
not_same_plates_2.append(sim)
#Pooled batch1 and batch2
if CLOUD_To_MOA[key1[0:8]] == CLOUD_To_MOA[key2[0:8]]:
same.append(sim)
else:
not_same.append(sim)
MoA_same = list(same)
MoA_Notsame = list(not_same)
#Cohen's D > 0.8 is considered to be a large effect already (see http://staff.bath.ac.uk/pssiw/stats2/page2/page14/page14.html)
# Maybe use the fold change to show how much more similar same-drug pairs are
print 'Pval Batch1: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same_plates_1,not_same_plates_1)[1],cohen_d(same_plates_1,not_same_plates_1))
print 'Pval Batch2: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same_plates_2,not_same_plates_2)[1],cohen_d(same_plates_2,not_same_plates_2))
print 'Pval All: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same,not_same)[1],cohen_d(same,not_same))
plt.title('Similarity between same mechanism of action (MOAs)')
plt.boxplot([same_plates_1,not_same_plates_1,same_plates_2,not_same_plates_2,same,not_same])
plt.xticks(range(1,7),['Batch1_Same','Batch1_Different','Batch2_Same','Batch2_Different','All_Same','All_Different'], rotation=15, fontsize= 6)
plt.xlabel('Perturbation Group')
plt.ylabel('Cosine Similarity')
#plt.show()
plt.savefig('../results/Validate_Morphospace/MOAs_CosineSimilarity_Boxplot.pdf')
plt.close()
errors=[1.96 * (np.std(x) / np.sqrt(float(len(x)))) for x in [same_plates_1,not_same_plates_1,same_plates_2,not_same_plates_2,same,not_same]]
# set width of bar
barWidth = 0.4
measurments = [same_plates_1,same_plates_2,same]
measurments_errors = [errors[0],errors[2],errors[4]]
controls = [not_same_plates_1,not_same_plates_2,not_same]
controls_errors = [errors[1],errors[3],errors[5]]
# Set position of bar on X axis
r1 = np.arange(len(measurments))
r2 = [x + barWidth for x in r1]
plt.title('MOA similarity')
plt.bar(r1,[np.mean(x) for x in measurments],width=barWidth, yerr=measurments_errors,alpha=0.5, ecolor='black', capsize=10,color='#40B9D4')
plt.bar(r2,[np.mean(x) for x in controls],width=barWidth, yerr=controls_errors,alpha=0.5, ecolor='black', capsize=10,color='grey')
plt.legend(['Same','Different'])
plt.xticks(range(0,3),['Batch1','Batch2','Combined'], rotation=15, fontsize= 6)
plt.xlabel('Batch')
plt.ylabel('Cosine Similarity')
plt.savefig('../results/Validate_Morphospace/MOAs_CosineSimilarity_Barplot.pdf')
#plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
5. Check similarity with same ATC Same as before (3.) but bin vectors based on ATC classification
###Code
CLOUD_To_ATC = {}
fp = open('../data/Validate_Morphospace/CLOUD_to_ATC.csv','r')
fp.next()
for line in fp:
tmp = line.strip().split(',')
CLOUD_To_ATC[tmp[0]] = set(tmp[2].split(';'))
#go through all significant replicates and create a plot indicating the similarity for same/different drug perturbations regarding their ATC classification
'''
same/not_same (the two lists that do not distinguish between the two batches --> taken for final plot), same means same ATC/ not_same for different
'''
not_same = []
same = []
same_plates_1 = []
not_same_plates_1 = []
same_plates_2 = []
not_same_plates_2 = []
for key1,value1 in sign_single_Drugs.iteritems():
plate1 = int(key1.split('_')[1])
for key2, value2 in sign_single_Drugs.iteritems():
plate2 = int(key2.split('_')[1])
if key1 > key2:
sim = 1-distance.cosine(value1, value2) # calculate cosine similarity between two vectors
#Batch1
if (plate1 < 1315065 and plate2 < 1315065):
if len(CLOUD_To_ATC[key1[0:8]].intersection(CLOUD_To_ATC[key2[0:8]])) > 0:
same_plates_1.append(sim)
else:
not_same_plates_1.append(sim)
#Batch2
if (plate1 >= 1315065 and plate2 >= 1315065):
if len(CLOUD_To_ATC[key1[0:8]].intersection(CLOUD_To_ATC[key2[0:8]])) > 0:
same_plates_2.append(sim)
else:
not_same_plates_2.append(sim)
#Pooled batch1 and batch2
if len(CLOUD_To_ATC[key1[0:8]].intersection(CLOUD_To_ATC[key2[0:8]])) > 0:
same.append(sim)
else:
not_same.append(sim)
ATC_same = list(same)
ATC_Notsame = list(not_same)
#Cohen's D > 0.8 is considered to be a large effect already (see http://staff.bath.ac.uk/pssiw/stats2/page2/page14/page14.html)
# Maybe use the fold change to show how much more similar same-drug pairs are
print 'Pval Batch1: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same_plates_1,not_same_plates_1)[1],cohen_d(same_plates_1,not_same_plates_1))
print 'Pval Batch2: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same_plates_2,not_same_plates_2)[1],cohen_d(same_plates_2,not_same_plates_2))
print 'Pval All: %.2E (Cohens D: %.2f)' %(stats.mannwhitneyu(same,not_same)[1],cohen_d(same,not_same))
plt.title('Similarity between same ATC code')
plt.boxplot([same_plates_1,not_same_plates_1,same_plates_2,not_same_plates_2,same,not_same])
plt.xticks(range(1,7),['Batch1_Same','Batch1_Different','Batch2_Same','Batch2_Different','All_Same','All_Different'], rotation=15, fontsize= 6)
plt.xlabel('Perturbation Group')
plt.ylabel('Cosine Similarity')
#plt.show()
plt.savefig('../results/Validate_Morphospace/ATC_CosineSimilarity_Boxplot.pdf')
plt.close()
errors=[1.96 * (np.std(x) / np.sqrt(float(len(x)))) for x in [same_plates_1,not_same_plates_1,same_plates_2,not_same_plates_2,same,not_same]]
# set width of bar
barWidth = 0.4
measurments = [same_plates_1,same_plates_2,same]
measurments_errors = [errors[0],errors[2],errors[4]]
controls = [not_same_plates_1,not_same_plates_2,not_same]
controls_errors = [errors[1],errors[3],errors[5]]
# Set position of bar on X axis
r1 = np.arange(len(measurments))
r2 = [x + barWidth for x in r1]
plt.title('ATC similarity')
plt.bar(r1,[np.mean(x) for x in measurments],width=barWidth, yerr=measurments_errors,alpha=0.5, ecolor='black', capsize=10,color='#40B9D4')
plt.bar(r2,[np.mean(x) for x in controls],width=barWidth, yerr=controls_errors,alpha=0.5, ecolor='black', capsize=10,color='grey')
plt.legend(['Same','Different'])
plt.xticks(range(0,3),['Batch1','Batch2','Combined'], rotation=15, fontsize= 6)
plt.xlabel('Batch')
plt.ylabel('Cosine Similarity')
plt.savefig('../results/Validate_Morphospace/ATC_CosineSimilarity_Barplot.pdf')
#plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
6. Check similarity in dependence of PPI distance Check whether two drugs are more similar if their targets are closer/more distant in the PPI network.
###Code
#Extract the PPI distances of the targets only (remove cytochromes, transporters, carriers etc.)
PPI_Distances = {}
fp = open('../data/Validate_Morphospace/Separation_TargetsOnly.csv','r')
fp.next()
for line in fp:
tmp = line.strip().split(',')
PPI_Distances[tmp[0]+','+tmp[1]] = float(tmp[4])
#Create bins
bins_batch1 = {}
bins_batch2 = {}
bins_all = {}
#Choose meaningful PPI distances (e.g. larger than 3.5 is almost nothing, while smaller than 0.7 is also already very small)
bins = [0.7,1.4,2.1,2.8,3.5]
for b in bins:
bins_batch1[b] = []
bins_batch2[b] = []
bins_all[b] = []
#go through all significant replicates and create a plot indicating the similarity for same/different drug perturbations regarding their PPI distance
'''
all_sims: contain the similarities
all_dist: contain the PPI distances
'''
all_sims = []
all_dist = []
for key1,value1 in sign_single_Drugs.iteritems():
drug1 = key1.split('_')[0]
plate1 = int(key1.split('_')[1])
for key2, value2 in sign_single_Drugs.iteritems():
drug2 = key2.split('_')[0]
plate2 = int(key2.split('_')[1])
if drug1 > drug2:
if PPI_Distances.has_key(drug1+','+drug2) == False:
continue
sim = 1-distance.cosine(value1, value2) # calculate cosine similarity between two vectors
dist = PPI_Distances[drug1+','+drug2]
all_sims.append(sim)
all_dist.append(dist)
if (plate1 < 1315065 and plate2 < 1315065):
if dist< 0.7:
bins_batch1[0.7].append(sim)
elif dist < 1.4:
bins_batch1[1.4].append(sim)
elif dist < 2.1:
bins_batch1[2.1].append(sim)
elif dist < 2.8:
bins_batch1[2.8].append(sim)
else:
bins_batch1[3.5].append(sim)
if (plate1 >= 1315065 and plate2 >= 1315065):
if dist < 0.7:
bins_batch2[0.7].append(sim)
elif dist < 1.4:
bins_batch2[1.4].append(sim)
elif dist < 2.1:
bins_batch2[2.1].append(sim)
elif dist < 2.8:
bins_batch2[2.8].append(sim)
else:
bins_batch2[3.5].append(sim)
if dist < 0.7:
bins_all[0.7].append(sim)
elif dist < 1.4:
bins_all[1.4].append(sim)
elif dist < 2.1:
bins_all[2.1].append(sim)
elif dist < 2.8:
bins_all[2.8].append(sim)
else:
bins_all[3.5].append(sim)
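# Hedged aside (not in the original code): the if/elif cascades above could be written with
# np.digitize, e.g. bin_key = bins[min(np.digitize(dist, bins), len(bins) - 1)]
# followed by bins_all[bin_key].append(sim).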
'''
Plot batch specific results (Boxplots)
'''
plt.title('PPI distance and cosine similarity (Batch1)')
plt.boxplot([bins_batch1[x] for x in bins])
plt.xticks(range(1,6),['0.7','1.4','2.1','2.8','3.5'], rotation=45)
plt.xlabel('PPI Distance')
plt.ylabel('Cosine Similarity')
#plt.show()
plt.savefig('../results/Validate_Morphospace/PPIDistance_CosineSimilarity_Batch1_Boxplot.pdf')
plt.close()
plt.title('PPI distance and cosine similarity (Batch2)')
plt.boxplot([bins_batch2[x] for x in bins])
plt.xticks(range(1,6),['0.7','1.4','2.1','2.8','3.5'], rotation=45)
plt.xlabel('PPI Distance')
plt.ylabel('Cosine Similarity')
#plt.show()
plt.savefig('../results/Validate_Morphospace/PPIDistance_CosineSimilarity_Batch2_Boxplot.pdf')
plt.close()
plt.title('PPI distance and cosine similarity (All)')
plt.boxplot([bins_all[x] for x in bins])
plt.xticks(range(1,6),['0.7','1.4','2.1','2.8','3.5'], rotation=45)
plt.xlabel('PPI Distance')
plt.ylabel('Cosine Similarity')
plt.savefig('../results/Validate_Morphospace/PPIDistance_CosineSimilarity_All_Boxplot.pdf')
#plt.show()
plt.close()
'''
Plot batch specific results (Barplots)
'''
errors=[1.96 * (np.std(x) / np.sqrt(float(len(x)))) for x in [bins_batch1[x] for x in bins]]
plt.title('PPI distance and cosine similarity (Batch1)')
plt.bar(range(0,5),[np.mean(bins_batch1[x]) for x in bins],yerr=errors,align='center', alpha=0.5, ecolor='black', capsize=10,color='#40B9D4', zorder=2)
plt.xticks(range(0,5),['0.7','1.4','2.1','2.8','3.5'], rotation=45)
plt.xlabel('PPI Distance')
plt.ylabel('Cosine Similarity')
#plt.show()
plt.savefig('../results/Validate_Morphospace/PPIDistance_CosineSimilarity_Batch1_Barplot.pdf')
plt.close()
errors=[1.96 * (np.std(x) / np.sqrt(float(len(x)))) for x in [bins_batch2[x] for x in bins]]
plt.title('PPI distance and cosine similarity (Batch2)')
plt.bar(range(0,5),[np.mean(bins_batch2[x]) for x in bins],yerr=errors,align='center', alpha=0.5, ecolor='black', capsize=10,color='#40B9D4', zorder=2)
plt.xticks(range(0,5),['0.7','1.4','2.1','2.8','3.5'], rotation=45)
plt.xlabel('PPI Distance')
plt.ylabel('Cosine Similarity')
plt.savefig('../results/Validate_Morphospace/PPIDistance_CosineSimilarity_Batch2_Barplot.pdf')
#plt.show()
plt.close()
errors=[1.96 * (np.std(x) / np.sqrt(float(len(x)))) for x in [bins_all[x] for x in bins]]
plt.title('PPI distance and cosine similarity (All)')
plt.bar(range(0,5),[np.mean(bins_all[x]) for x in bins],yerr=errors,align='center', alpha=0.5, ecolor='black', capsize=10,color='#40B9D4', zorder=2)
plt.xticks(range(0,5),['0.7','1.4','2.1','2.8','3.5'], rotation=45)
plt.xlabel('PPI Distance')
plt.ylabel('Cosine Similarity')
plt.savefig('../results/Validate_Morphospace/PPIDistance_CosineSimilarity_All_Barplot.pdf')
#plt.show()
plt.close()
print [np.mean(bins_all[x]) for x in bins]
print 'Foldchange: %.2f' %(float( [np.mean(bins_all[x]) for x in bins][0])/ [np.mean(bins_all[x]) for x in bins][4])
#Make a 2D heatmap
plt.hist2d(all_dist, all_sims, bins=(100, 100), cmap=plt.cm.inferno)
plt.xlabel('PPI Distance')
plt.ylabel('Cosine Similarity')
plt.colorbar()
#plt.show()
plt.savefig('../results/Validate_Morphospace/PPIDistance_CosineSimilarity_DensityMap.pdf')
plt.close()
###Output
_____no_output_____
###Markdown
7. Make summary plots Create the final plot that summarizes all results from replicates, MOA and ATC. Use the pooled results
###Code
#include all different measurements (i) replicates, (ii) MOA, (iii) ATC
all_bars = [replicate_same,replicate_NotSame,MoA_same,MoA_Notsame,ATC_same,ATC_Notsame]
#calculate error bars
errors=[1.96 * (np.std(x) / np.sqrt(float(len(x)))) for x in all_bars]
# set width of bar
barWidth = 0.4
#Define measurments and controls
measurments = [all_bars[0],all_bars[2],all_bars[4]]
measurments_errors = [errors[0],errors[2],errors[4]]
controls = [all_bars[1],all_bars[3],all_bars[5]]
controls_errors = [errors[1],errors[3],errors[5]]
# Set position of bar on X axis
r1 = np.arange(len(measurments))
r2 = [x + barWidth for x in r1]
#Plot bar chart
plt.title('Summary both batches combined')
plt.bar(r1,[np.mean(x) for x in measurments],width=barWidth, yerr=measurments_errors,alpha=0.5, ecolor='black', capsize=10,color='#40B9D4')
plt.bar(r2,[np.mean(x) for x in controls],width=barWidth, yerr=controls_errors,alpha=0.5, ecolor='black', capsize=10,color='grey')
plt.legend(['Same','Different'])
plt.xticks(range(0,3),['Replicates','MoA','ATC'], rotation=15, fontsize= 6)
plt.xlabel('Comparison')
plt.ylabel('Cosine Similarity')
plt.savefig('../results/Validate_Morphospace/All_Summary.pdf')
#plt.show()
plt.close()
###Output
_____no_output_____
|
Introduction Python.ipynb
|
###Markdown
Introduction to Data Structure in Python Variables and Assignment
###Code
x = 10
###Output
_____no_output_____
###Markdown

###Code
x = 'hello'
###Output
_____no_output_____
###Markdown

###Code
x = 2.7
###Output
_____no_output_____
###Markdown

###Code
y = 1.0
z = x + y
print(z)
###Output
_____no_output_____
###Markdown
Lists I want a list of numbers
###Code
items = [2.7, 3.1, 69.1]
###Output
_____no_output_____
###Markdown
 I want to print the second number in the list
###Code
print(items[1])
###Output
_____no_output_____
###Markdown
I want to print the length of the list
###Code
print(len(items))
###Output
_____no_output_____
###Markdown
I want to print all the numbers in the list
###Code
for item in items:
    print(item)
###Output
_____no_output_____
###Markdown
Introduction to Pandas Data FrameWhat is a data frame?2-d matrix 
###Code
import pandas as pd
df = pd.read_csv("sample.csv")
df
df.columns = ["area", "sales2014", "profit", "sales2016"]
df.dtypes
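# Hedged aside (toy values, not from sample.csv): a DataFrame can also be built directly
# from a dict of equal-length columns, which makes the "2-d matrix" idea above concrete.
toy = pd.DataFrame({"area": ["North", "South"], "sales2014": [100, 250]})  # toy.shape would be (2, 2)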
###Output
_____no_output_____
###Markdown
How many rows & columns does the dataframe have?
###Code
?df.shape
###Output
_____no_output_____
###Markdown
I want to see the top 5 rows of the dataframe
###Code
df.shape
df.head()
df1 = df.head(1)
df1
###Output
_____no_output_____
###Markdown
I want to see the bottom 2 rows only
###Code
df.tail(2)
df.index
###Output
_____no_output_____
###Markdown
What are the column names?
###Code
df.columns
###Output
_____no_output_____
###Markdown
Show me the values of the dataframe. (exclude the column and index information)
###Code
df.values
###Output
_____no_output_____
###Markdown
Can I quickly get a sense of how the data looks in the dataframe? Information like min, max, mean, etc.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
I want to sort the dataframe based on sales in descending order
###Code
df.sort_values(by=['sales2014', "profit"], ascending=False)
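# Hedged alternative (not in the original lab): the top-n rows by a column can also be taken
# directly with nlargest, e.g. df.nlargest(3, 'sales2014'); the 3 is only illustrative.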
###Output
_____no_output_____
###Markdown
I want to do the same, but based on profit column
###Code
df.sort_values(by='profit', ascending=False)
###Output
_____no_output_____
###Markdown
I want to find the area with least profit & sales
###Code
df.sort_values(by=['sales2014','profit'], ascending=True)
###Output
_____no_output_____
###Markdown
I want to view the sales alone
###Code
df.sales2014
###Output
_____no_output_____
###Markdown
I want to view just sales & profit columns, not the area names
###Code
df.loc[:, ['sales2014', 'profit']]
###Output
_____no_output_____
###Markdown
I want the third row of the dataframe
###Code
df.loc[2, :]
###Output
_____no_output_____
###Markdown
I want the third row, sales & profit columns
###Code
df.loc[2, ['sales2014', 'profit']]
###Output
_____no_output_____
###Markdown
I want rows between index 2 and 3, and column 2 only
###Code
df.iloc[2:3, 2]
###Output
_____no_output_____
|
NoneLinearRegression.ipynb
|
###Markdown
Non Linear Regression Analysis Objectives After completing this lab you will be able to: * Differentiate between linear and non-linear regression * Use a non-linear regression model in Python. If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression, since linear regression presumes that the data is linear. Let's learn about non-linear regressions and apply an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. Importing required libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Although linear regression can do a great job at modeling some datasets, it cannot be used for all datasets. First recall how linear regression models a dataset. It models the linear relationship between a dependent variable y and the independent variable x. It has a simple equation, of degree 1, for example y = $2x$ + 3.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-linear regression is a method to model the non-linear relationship between the independent variables $x$ and the dependent variable $y$. Essentially any relationship that is not linear can be termed as non-linear, and is usually represented by the polynomial of $k$ degrees (maximum power of $x$). For example:$$ \ y = a x^3 + b x^2 + c x + d \ $$Non-linear functions can have elements like exponentials, logarithms, fractions, and so on. For example: $$ y = \log(x)$$We can have a function that's even more complicated such as :$$ y = \log(a x^3 + b x^2 + c x + d)$$ Let's take a look at a cubic function's graph.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graphic of this function is not a straight line over the 2D plane. So this is a non-linear function. Some other types of non-linear functions are: Quadratic $$ Y = X^2 $$
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Exponential An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠ 1, and x is any real number. The base, c, is constant and the exponent, x, is a variable.
###Code
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
Y_noise = 5 * np.random.normal(size=X.size)
Y_data = Y + Y_noise
plt.plot(X, Y_data, 'bo')
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Logarithmic The response $y$ is the result of applying the logarithmic map from the input $x$ to the output $y$. It is one of the simplest forms of **log()**: i.e. $$ y = \log(x)$$ Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$ values. In general form it would be written as \begin{equation}y = \log(X)\end{equation}
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
C:\Users\Spyx\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in log
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Sigmoidal/Logistic $$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-Linear Regression example For an example, we're going to try and fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns, the first, a year between 1960 and 2014, the second, China's corresponding annual gross domestic income in US dollars for that year.
###Code
import numpy as np
import pandas as pd
df = pd.read_csv("china_gdp.csv")
df.head(10)
###Output
_____no_output_____
###Markdown
Plotting the Dataset This is what the datapoints look like. It kind of looks like either a logistic or an exponential function. The growth starts off slow, then from 2005 onward the growth is very significant. And finally, it decelerates slightly in the 2010s.
###Code
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model From an initial look at the plot, we determine that the logistic function could be a good approximation, since it has the property of starting with slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
The formula for the logistic function is the following:$$ \hat{Y} = \frac1{1+e^{\beta\_1(X-\beta\_2)}}$$$\beta\_1$: Controls the curve's steepness, $\beta\_2$: Slides the curve on the x-axis. Building The Model Now, let's build our regression model and initialize its parameters.
###Code
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
###Output
_____no_output_____
###Markdown
Lets look at a sample sigmoid line that might fit with the data:
###Code
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
###Output
_____no_output_____
###Markdown
Our task here is to find the best parameters for our model. Lets first normalize our x and y:
###Code
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
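# (Aside, not from the lab: normalizing keeps Beta_1*(x - Beta_2) in a numerically friendly
#  range; with raw years near 2000, curve_fit's default starting guess of all ones would make
#  convergence difficult.)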
###Output
_____no_output_____
###Markdown
How do we find the best parameters for our fit line? We can use **curve_fit**, which uses non-linear least squares to fit our sigmoid function to the data. It finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, \*popt) - ydata is minimized. popt are our optimized parameters.
###Code
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
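# (Aside: curve_fit also accepts an explicit initial guess via p0, e.g.
#  curve_fit(sigmoid, xdata, ydata, p0=(5.0, 0.5)); the p0 values here are only illustrative.)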
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
###Output
beta_1 = 690.451711, beta_2 = 0.997207
###Markdown
Now we plot our resulting regression model.
###Code
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Practice Can you calculate the accuracy of our model?
###Code
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
###Output
Mean absolute error: 0.03
Residual sum of squares (MSE): 0.00
R2-score: 0.95
###Markdown
Non Linear Regression Analysis In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. Importing required libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Though linear regression is very good at solving many problems, it cannot be used for all datasets. First recall how linear regression models a dataset. It models a linear relation between a dependent variable y and an independent variable x. It has a simple equation of degree 1, for example y = 2*(x) + 5.
###Code
x = np.arange(-10.0, 10.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 5
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.figure(figsize=(10,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-linear regression models a relationship between independent variables $x$ and a dependent variable $y$ using a non-linear function. Essentially any relationship that is not linear can be termed non-linear, and is usually represented by a polynomial of degree $k$ (maximum power of $x$). $$ \ y = a x^3 + b x^2 + c x + d \ $$ Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$ Or even more complicated, such as: $$ y = \log(a x^3 + b x^2 + c x + d)$$ Let's take a look at a cubic function's graph.
###Code
x = np.arange(-10.0, 10.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 2*x + 5
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.figure(figsize=(10,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graphic of this function is not a straight line over the 2D plane. So this is a non-linear function. Some other types of non-linear functions are: Quadratic $$ Y = X^2 $$
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.figure(figsize=(10,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Exponential An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠1, and x is any real number. The base, c, is constant and the exponent, x, is a variable.
###Code
X = np.arange(-10.0, 10.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.figure(figsize=(10,6))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Logarithmic The response $y$ is the result of applying a logarithmic map from the input $x$ to the output variable $y$. It is one of the simplest forms of __log()__: i.e. $$ y = \log(x)$$ Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as \begin{equation}y = \log(X)\end{equation}
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.figure(figsize=(10,6))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
C:\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in log
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Sigmoidal/Logistic $$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
###Code
X = np.arange(-15.0, 15.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.figure(figsize=(10,6))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-Linear Regression example
###Code
df = pd.read_csv("data/china_gdp.csv")
df.head(10)
df.tail()
###Output
_____no_output_____
###Markdown
Plotting the Dataset
###Code
plt.figure(figsize=(10,6))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model From an initial look at the plot, we determine that the logistic function could be a good approximation, since it has the property of starting with slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.figure(figsize=(10,6))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
The formula for the logistic function is the following:$$ \hat{Y} = \frac1{1+e^{\beta_1(X-\beta_2)}}$$$\beta_1$: Controls the curve's steepness,$\beta_2$: Slides the curve on the x-axis. Building The Model Now, let's build our regression model and initialize its parameters.
###Code
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
###Output
_____no_output_____
###Markdown
Lets look at a sample sigmoid line that might fit with the data:
###Code
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
plt.figure(figsize=(10,6))
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
plt.show()
###Output
_____no_output_____
###Markdown
Our task here is to find the best parameters for our model. Lets first normalize our x and y:
###Code
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
###Output
_____no_output_____
###Markdown
How do we find the best parameters for our fit line? We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data. It finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized. popt are our optimized parameters.
###Code
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
###Output
beta_1 = 690.451711, beta_2 = 0.997207
###Markdown
Now we plot our resulting regression model.
###Code
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.figure(figsize=(10,6))
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
split data into train/test
###Code
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
###Output
_____no_output_____
###Markdown
build the model using train set
###Code
popt, pcov = curve_fit(sigmoid, train_x, train_y)
###Output
_____no_output_____
###Markdown
predict using test set
###Code
y_hat = sigmoid(test_x, *popt)
###Output
_____no_output_____
###Markdown
evaluation
###Code
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
###Output
Mean absolute error: 0.03
Residual sum of squares (MSE): 0.00
R2-score: 0.97
###Markdown
Non Linear Regression Analysis If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear. Let's learn about non-linear regressions and apply an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. Importing required libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Though linear regression is very good at solving many problems, it cannot be used for all datasets. First recall how linear regression models a dataset. It models a linear relation between a dependent variable y and an independent variable x. It has a simple equation of degree 1, for example y = $2x$ + 3.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-linear regression models a relationship between independent variables $x$ and a dependent variable $y$ using a non-linear function. Essentially any relationship that is not linear can be termed non-linear, and is usually represented by a polynomial of degree $k$ (maximum power of $x$). $$ \ y = a x^3 + b x^2 + c x + d \ $$ Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$ Or even more complicated, such as: $$ y = \log(a x^3 + b x^2 + c x + d)$$ Let's take a look at a cubic function's graph.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graphic of this function is not a straight line over the 2D plane. So this is a non-linear function. Some other types of non-linear functions are: Quadratic $$ Y = X^2 $$
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Exponential An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠1, and x is any real number. The base, c, is constant and the exponent, x, is a variable.
###Code
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Logarithmic The response $y$ is the result of applying a logarithmic map from the input $x$ to the output variable $y$. It is one of the simplest forms of __log()__: i.e. $$ y = \log(x)$$ Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as \begin{equation}y = \log(X)\end{equation}
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in log
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Sigmoidal/Logistic $$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-Linear Regression example For an example, we're going to try and fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns, the first, a year between 1960 and 2014, the second, China's corresponding annual gross domestic income in US dollars for that year.
###Code
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
###Output
2020-01-06 14:58:35 URL:https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv [1218/1218] -> "china_gdp.csv" [1]
###Markdown
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Plotting the Dataset This is what the datapoints look like. It kind of looks like either a logistic or an exponential function. The growth starts off slow, then from 2005 onward the growth is very significant. And finally, it decelerates slightly in the 2010s.
###Code
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model From an initial look at the plot, we determine that the logistic function could be a good approximation, since it has the property of starting with slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
The formula for the logistic function is the following:$$ \hat{Y} = \frac1{1+e^{\beta_1(X-\beta_2)}}$$$\beta_1$: Controls the curve's steepness,$\beta_2$: Slides the curve on the x-axis. Building The Model Now, let's build our regression model and initialize its parameters.
###Code
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
###Output
_____no_output_____
###Markdown
Lets look at a sample sigmoid line that might fit with the data:
###Code
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
###Output
_____no_output_____
###Markdown
Our task here is to find the best parameters for our model. Lets first normalize our x and y:
###Code
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
###Output
_____no_output_____
###Markdown
How do we find the best parameters for our fit line? We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data. It finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized. popt are our optimized parameters.
###Code
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
###Output
beta_1 = 690.447527, beta_2 = 0.997207
###Markdown
Now we plot our resulting regression model.
###Code
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Practice Can you calculate the accuracy of our model?
###Code
# write your code here
from sklearn.metrics import r2_score
#test_x_poly = poly.fit_transform(test_x)
#test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(y - ydata)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y - ydata) ** 2))
print("R2-score: %.2f" % r2_score(y , ydata) )
print(y)
print()
print(ydata)
###Output
Mean absolute error: 0.03
Residual sum of squares (MSE): 0.00
R2-score: 0.97
[4.49539475e-08 6.37288816e-08 9.03451318e-08 1.28077609e-07
1.81568982e-07 2.57400920e-07 3.64903912e-07 5.17305295e-07
7.33356762e-07 1.03964157e-06 1.47384537e-06 2.08939294e-06
2.96202156e-06 4.19909934e-06 5.95283518e-06 8.43900509e-06
1.19634982e-05 1.69599470e-05 2.40430680e-05 3.40842698e-05
4.83188158e-05 6.84977000e-05 9.71028632e-05 1.37652117e-04
1.95131057e-04 2.76604650e-04 3.92082789e-04 5.55744456e-04
7.87667305e-04 1.11626792e-03 1.58173779e-03 2.24086739e-03
3.17379129e-03 4.49336343e-03 6.35807672e-03 8.98964456e-03
1.26964844e-02 1.79042082e-02 2.51934802e-02 3.53436061e-02
4.93759515e-02 6.85834169e-02 9.45197580e-02 1.28907030e-01
1.73408869e-01 2.29230970e-01 2.96575368e-01 3.74101736e-01
4.58679266e-01 5.45706679e-01 6.30028212e-01 7.07099234e-01
7.73877455e-01 8.29110423e-01 8.73065020e-01]
[0.0057156 0.00478589 0.00450854 0.00483806 0.00570384 0.00673204
0.00732793 0.00695878 0.0067595 0.00760213 0.00883705 0.00951846
0.01083164 0.01320831 0.01373801 0.01556399 0.01464318 0.01664431
0.01432975 0.01707961 0.01831512 0.01877086 0.01965745 0.02211047
0.02492384 0.02969431 0.02885665 0.02620514 0.03000746 0.03341025
0.03466722 0.03683833 0.04103727 0.04276985 0.0542994 0.07069473
0.08313453 0.09253259 0.09901435 0.10521147 0.11639597 0.12865827
0.1411811 0.15933902 0.18752073 0.21908602 0.26362418 0.34023675
0.44022261 0.48860473 0.58326959 0.7235687 0.81716665 0.91653856
1. ]
###Markdown
Non Linear Regression Analysis If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear. Let's learn about non-linear regressions and apply an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. Importing required libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Though linear regression is very good at solving many problems, it cannot be used for all datasets. First recall how linear regression models a dataset. It models a linear relation between a dependent variable y and an independent variable x. It has a simple equation of degree 1, for example y = $2x$ + 3.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-linear regression models a relationship between independent variables $x$ and a dependent variable $y$ using a non-linear function. Essentially any relationship that is not linear can be termed non-linear, and is usually represented by a polynomial of degree $k$ (maximum power of $x$). $$ \ y = a x^3 + b x^2 + c x + d \ $$ Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$ Or even more complicated, such as: $$ y = \log(a x^3 + b x^2 + c x + d)$$ Let's take a look at a cubic function's graph.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graphic of this function is not a straight line over the 2D plane. So this is a non-linear function. Some other types of non-linear functions are: Quadratic $$ Y = X^2 $$
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Exponential An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠1, and x is any real number. The base, c, is constant and the exponent, x, is a variable.
###Code
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Logarithmic The response $y$ is the result of applying a logarithmic map from the input $x$ to the output variable $y$. It is one of the simplest forms of __log()__: i.e. $$ y = \log(x)$$ Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as \begin{equation}y = \log(X)\end{equation}
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Sigmoidal/Logistic $$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-Linear Regression example For an example, we're going to try and fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns, the first, a year between 1960 and 2014, the second, China's corresponding annual gross domestic income in US dollars for that year.
###Code
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
###Output
_____no_output_____
###Markdown
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Plotting the Dataset This is what the datapoints look like. It kind of looks like either a logistic or an exponential function. The growth starts off slow, then from 2005 onward the growth is very significant. And finally, it decelerates slightly in the 2010s.
###Code
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model From an initial look at the plot, we determine that the logistic function could be a good approximation, since it has the property of starting with slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
The formula for the logistic function is the following:$$ \hat{Y} = \frac1{1+e^{\beta_1(X-\beta_2)}}$$$\beta_1$: Controls the curve's steepness,$\beta_2$: Slides the curve on the x-axis. Building The Model Now, let's build our regression model and initialize its parameters.
###Code
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
###Output
_____no_output_____
###Markdown
Lets look at a sample sigmoid line that might fit with the data:
###Code
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
###Output
_____no_output_____
###Markdown
Our task here is to find the best parameters for our model. Lets first normalize our x and y:
###Code
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
###Output
_____no_output_____
###Markdown
How do we find the best parameters for our fit line? We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data. It finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized. popt are our optimized parameters.
###Code
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
###Output
_____no_output_____
###Markdown
Now we plot our resulting regression model.
###Code
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Practice Can you calculate the accuracy of our model?
###Code
# write your code here
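# One possible solution sketch, mirroring the train/test approach used in the earlier copy of
# this lab (assumes xdata, ydata, sigmoid and curve_fit from the cells above are in scope):
from sklearn.metrics import r2_score
msk = np.random.rand(len(df)) < 0.8
train_x, test_x = xdata[msk], xdata[~msk]
train_y, test_y = ydata[msk], ydata[~msk]
popt, pcov = curve_fit(sigmoid, train_x, train_y)
y_hat = sigmoid(test_x, *popt)
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
print("R2-score: %.2f" % r2_score(y_hat, test_y))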
###Output
_____no_output_____
|
docs/python/matplotlib/GGplot.ipynb
|
###Markdown
---title: "GGplot"author: "Aavinash"date: 2020-09-04description: "-"type: technical_notedraft: false---
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import style
style.use('ggplot')
fig = plt.figure()
ax1 = fig.add_subplot(111, projection='3d')
x3 = [1,2,3,4,5,6,7,8,9,10]
y3 = [5,6,7,8,2,5,6,3,7,2]
z3 = np.zeros(10)
dx = np.ones(10)
dy = np.ones(10)
dz = [1,2,3,4,5,6,7,8,9,10]
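# bar3d draws one bar per point: (x3, y3, z3) give each bar's anchor position and
# (dx, dy, dz) give its width, depth, and height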
ax1.bar3d(x3, y3, z3, dx, dy, dz)
ax1.set_xlabel('x axis')
ax1.set_ylabel('y axis')
ax1.set_zlabel('z axis')
plt.show()
fig = plt.figure()
ax1 = fig.add_subplot(111, projection='3d')
x, y, z = axes3d.get_test_data()
ax1.plot_wireframe(x,y,z, rstride = 3, cstride = 3)
ax1.set_xlabel('x axis')
ax1.set_ylabel('y axis')
ax1.set_zlabel('z axis')
plt.show()
###Output
_____no_output_____
|
notebooks/solutions/dataaccess_solutions.ipynb
|
###Markdown
 Introduction to MetPy Prerequisite Lessons Foundations in Data Access Activity NotebookHow to use this Notebook:This notebook pairs with the Foundations in Data Access lesson. Follow along with the instructions presented in the lesson, then return to this notebook when prompted. After an activity, you will be prompted to return to the lesson to proceed. Activity 0: Import required packages
###Code
## CELL 0A
## INSTRUCTIONS: Run this cell
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# Here is where we import the TDSCatalog class from Siphon for obtaining our data
from siphon.catalog import TDSCatalog
###Output
_____no_output_____
###Markdown
Activity 1: Getting started with the THREDDS Data Server We can easily view a THREDDS Data Server (TDS) Catalog in a browser. For this activity, we'll examine Unidata's TDS catalog of case studies. https://thredds.ucar.edu/thredds/casestudies/catalog.html
###Code
## CELL 1A
## INSTRUCTIONS: Open the TDS link above in a new tab in your browser.
## Then browse the folders to find catalog URL to:
## Hurricane Harvey GOES-16 imagery
## Mesoscale-1 extent
## Channel 02
## on August 26, 2017
# Paste the URL here as a string:
url = "https://thredds.ucar.edu/thredds/catalog/casestudies/harvey/goes16/Mesoscale-1/Channel02/20170826/catalog.html"
# Change the URL above to the xml version of the catalog using Python's string replace() method
xmlurl = url.replace(".html", ".xml")
print(xmlurl)
###Output
https://thredds.ucar.edu/thredds/catalog/casestudies/harvey/goes16/Mesoscale-1/Channel02/20170826/catalog.xml
###Markdown
Now we have the catalog located, it's time to create and examine the TDSCatalog object
###Code
## CELL 1B
## INSTRUCTIONS: Run this cell
# Create the TDS Catalog object, satcat
satcat = TDSCatalog(xmlurl)
# The catalog itself isn't very useful to us.
# What `is` useful is the datasets property of the object.
# There are a LOT of items in the datasets property, so
# let's just examine the first 10.
satcat.datasets[0:10]
###Output
_____no_output_____
###Markdown
The `datasets` property of the `satcat` object shows us a list of the .nc4 files that contain the data we'll use.
###Code
## CELL 1C
## INSTRUCTIONS: Determine how many total items are in satcat.datasets
# Type your code below:
#answer:
len(satcat.datasets)
###Output
_____no_output_____
###Markdown
We now have a list of all files available in the catalog, but the data are not yet pulled into memory for visualization or analysis. For this, we need to use the `remote_access()` method from Siphon.
###Code
## CELL 1D
## INSTRUCTIONS: Run this cell
# We will arbitrarily choose the 1000th file in the list to explore
# In the next section, we will discuss the use of xarray here
satdata = satcat.datasets[1000].remote_access(use_xarray=True)
# Print the type of object that satdata is
type(satdata)
###Output
_____no_output_____
###Markdown
Now we have an xarray `Dataset` that we can work with. However, we have not yet pulled back the layers enough to expose a single array we can visualize or do analysis with. To do any further work, we'll need to parse not only the data, but the metadata as well. In the next section, we'll explore this type of multi-dimensional dataset. When the above activity is complete, save this notebook and return to the course tab Activity 2: Explore Multi-dimensional data structures xarray HTML formatted summary toolXarray has an HTML-formatted interactive summary tool for examing datasets. Simply execute the variable name to create the summary.
###Code
## CELL 2A
## INSTRUCTIONS: Run this cell to create a formatted exploration tool for the xarray dataset
satdata
###Output
_____no_output_____
###Markdown
We now see an interactive summary of the dimensions, coordinates, variables, attributes for the dataset. This information can help with plotting, analysis, and generally understanding the data you are working with. Answer the questions below given the information in the HTML formatted summary table above.
###Code
## CELL 2B
## INSTRUCTIONS: Find the following information about the dataset:
# 1. The title, or full description of the dataset
# answer: Sectorized Cloud and Moisture Imagery for the TMESO region.
# 2. The name of the variable that contains the satellite imagery
# answer: Sectorized_CMI
# 3. The coordinate system the data were collected in
# answer: Lambert Conformal
# 4. The size of the array (# cells in x and y)
# answer: x=2184 y=2468
# 5. The metadata conventions the dataset uses
# answer: CF-1.6
###Output
_____no_output_____
###Markdown
More Info You may see the CF (Climate and Forecasting) metadata conventions in many popular atmospheric datasets. These conventions provide standardized variable names and units and recommendations on metadata such as projection information and coordinate information. You can read more about CF conventions here: https://cfconventions.org/ Get the data arrayThere are several ways to extract the array containing the satellite imagery from the xarray dataset depending on your specific use case. The method we'll use in this example uses MetPy and the parse_cf() method.
###Code
## CELL 2C
## INSTRUCTIONS: set `var` as the name of the data variable from number 2 above as a string
var = "Sectorized_CMI"
# import metpy
import metpy
# extract the data array from the xarray dataset
satarray = satdata.metpy.parse_cf(var)
type(satarray)
###Output
_____no_output_____
###Markdown
Plot on projected axes with CartopyNow we have an array that we can do analysis with or plot. Let's now pull the projection information from the dataset and plot it on projected axes with Cartopy.
###Code
## CELL 2D
## INSTRUCTIONS: Set the projection for the data array
# given the information in the satdata object
# Use the Cartopy documentation for syntax
# https://scitools.org.uk/cartopy/docs/latest/crs/projections.html
# Or refer to the Foundations in Cartopy lesson
# Set the projection of the data
proj = ccrs.LambertConformal()
# Plot the data
fig = plt.figure()
ax = fig.add_subplot(projection=proj)
ax.imshow(satarray,transform=proj)
###Output
_____no_output_____
###Markdown
 Introduction to MetPy Prerequisite Lessons Foundations in Data Access Activity NotebookHow to use this Notebook:This notebook pairs with the Foundations in Data Access lesson. Follow along with the instructions presented in the lesson, then return to this notebook when prompted. After an activity, you will be prompted to return to the lesson to proceed. Activity 0: Import required packages
###Code
## CELL 0A
## INSTRUCTIONS: Run this cell
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# Here is where we import the TDSCatalog class from Siphon for obtaining our data
from siphon.catalog import TDSCatalog
###Output
_____no_output_____
###Markdown
Activity 1: Getting started with the THREDDS Data ServerWe can easily view a THREDDS Data Server (TDS) Catalog in a browser. For this activity, we'll examine Unidata's TDS catalog of case studies. https://thredds.ucar.edu/thredds/casestudies/catalog.html
###Code
## CELL 1A
## INSTRUCTIONS: Open the TDS link above in a new tab in your browser.
## Then browse the folders to find catalog URL to:
## Hurricane Harvey GOES-16 imagery
## Mesoscale-1 extent
## Channel 02
## on August 26, 2017
# Paste the URL here as a string:
url = "https://thredds.ucar.edu/thredds/catalog/casestudies/harvey/goes16/Mesoscale-1/Channel02/20170826/catalog.html"
# Change the URL above to point to the XML catalog using Python's built-in string method replace()
xmlurl = url.replace(".html", ".xml")
print(xmlurl)
###Output
https://thredds.ucar.edu/thredds/catalog/casestudies/harvey/goes16/Mesoscale-1/Channel02/20170826/catalog.xml
###Markdown
Now that we have located the catalog, it's time to create and examine the TDSCatalog object.
###Code
## CELL 1B
## INSTRUCTIONS: Run this cell
# Create the TDS Catalog object, satcat
satcat = TDSCatalog(xmlurl)
# The catalog itself isn't very useful to us.
# What `is` useful is the datasets property of the object.
# There are a LOT of items in the datasets property, so
# let's just examine the first 10.
satcat.datasets[0:10]
###Output
_____no_output_____
###Markdown
The `datasets` property of the `satcat` object shows us a list of the .nc4 files that contain the data we'll use.
###Code
## CELL 1C
## INSTRUCTIONS: Determine how many total items are in satcat.datasets
# Type your code below:
#answer:
len(satcat.datasets)
###Output
_____no_output_____
###Markdown
We now have a list of all files available in the catalog, but the data are not yet pulled into memory for visualization or analysis. For this, we need to use the `remote_access()` method from Siphon.
###Code
## CELL 1D
## INSTRUCTIONS: Run this cell
# We will arbitrarily choose the file at index 1000 in the list to explore
# In the next section, we will discuss the use of xarray here
satdata = satcat.datasets[1000].remote_access(use_xarray=True)
# Print the type of object that satdata is
type(satdata)
###Output
_____no_output_____
###Markdown
Now we have an xarray `Dataset` that we can work with. However, we have not yet pulled back the layers enough to expose a single array we can visualize or do analysis with. To do any further work, we'll need to parse not only the data, but the metadata as well. In the next section, we'll explore this type of multi-dimensional dataset. When the above activity is complete, save this notebook and return to the course tab. Activity 2: Explore Multi-dimensional data structures. xarray HTML-formatted summary tool: Xarray has an HTML-formatted interactive summary tool for examining datasets. Simply execute the variable name to create the summary.
###Code
## CELL 2A
## INSTRUCTIONS: Run this cell to create a formatted exploration tool for the xarray dataset
satdata
###Output
_____no_output_____
###Markdown
We now see an interactive summary of the dimensions, coordinates, variables, and attributes of the dataset. This information can help with plotting, analysis, and generally understanding the data you are working with. Answer the questions below given the information in the HTML-formatted summary table above.
###Code
## CELL 2B
## INSTRUCTIONS: Find the following information about the dataset:
# 1. The title, or full description of the dataset
# answer: Sectorized Cloud and Moisture Imagery for the TMESO region.
# 2. The name of the variable that contains the satellite imagery
# answer: Sectorized_CMI
# 3. The coordinate system the data were collected in
# answer: Lambert Conformal
# 4. The size of the array (# cells in x and y)
# answer: x=2184 y=2468
# 5. The metadata conventions the dataset uses
# answer: CF-1.6
###Output
_____no_output_____
###Markdown
More Info: You may see the CF (Climate and Forecasting) metadata conventions in many popular atmospheric datasets. These conventions provide standardized variable names and units and recommendations on metadata such as projection information and coordinate information. You can read more about CF conventions here: https://cfconventions.org/ Get the data array: There are several ways to extract the array containing the satellite imagery from the xarray dataset, depending on your specific use case. The method we'll use in this example uses MetPy and the parse_cf() method.
###Code
## CELL 2C
## INSTRUCTIONS: set `var` as the name of the data variable from number 2 above as a string
var = "Sectorized_CMI"
# import metpy
import metpy
# extract the data array from the xarray dataset
satarray = satdata.metpy.parse_cf(var)
type(satarray)
###Output
_____no_output_____
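###Markdown
Optional aside (editor's sketch, not part of the course activity): plain xarray indexing returns the same data variable; `parse_cf()` additionally attaches the parsed projection metadata that makes the plotting step below easier.
###Code
# Equivalent extraction with xarray alone, using the variable name found in CELL 2B
plain_array = satdata[var]
type(plain_array)
###Output
_____no_output_____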
###Markdown
Plot on projected axes with Cartopy: Now we have an array that we can do analysis with or plot. Let's now pull the projection information from the dataset and plot it on projected axes with Cartopy.
###Code
## CELL 2D
## INSTRUCTIONS: Set the projection for the data array
# given the information in the satdata object
# Use the Cartopy documentation for syntax
# https://scitools.org.uk/cartopy/docs/latest/crs/projections.html
# Or refer to the Foundations in Cartopy lesson
# Set the projection of the data
proj = ccrs.LambertConformal()
# Plot the data
fig = plt.figure()
ax = fig.add_subplot(projection=proj)
ax.imshow(satarray,transform=proj)
###Output
_____no_output_____
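###Markdown
Optional refinement (editor's sketch, not part of the course activity): rather than hard-coding a default `LambertConformal()`, MetPy can build the Cartopy CRS from the projection metadata it parsed in CELL 2C, and the array's own x/y coordinates can supply the image extent. This assumes a MetPy version that exposes the `metpy.cartopy_crs`, `metpy.x`, and `metpy.y` accessors.
###Code
# Build the projection from the parsed metadata instead of assuming default parameters
data_proj = satarray.metpy.cartopy_crs
x = satarray.metpy.x.values
y = satarray.metpy.y.values
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(projection=data_proj)
# Supplying the coordinate extent places the image correctly in projection space
ax.imshow(satarray, origin='upper', cmap='Greys_r',
          extent=(x.min(), x.max(), y.min(), y.max()), transform=data_proj)
ax.coastlines(resolution='50m', color='tab:orange')
plt.show()
###Output
_____no_output_____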
|
.ipynb_checkpoints/Census2000vs1900-checkpoint.ipynb
|
###Markdown
Assignment 1: Visualization Design. In this assignment, you will design a visualization for a small data set and provide a rigorous rationale for your design choices. You should in theory be ready to explain the contribution of every pixel in the display. You are free to use any graphics or charting tool you please - including drafting it by hand. However, you may find it most instructive to create the chart from scratch using a graphics API of your choice. Your task is to design a static (i.e., single image) visualization that you believe effectively communicates the data, and provide a short write-up (no more than 4 paragraphs) describing your design. Start by choosing a question you'd like your visualization to answer. Design your visualization to answer that question, and use the question as the title of your graphic. While you must use the data set given, note that you are free to transform the data as you see fit. Such transforms may include (but are not limited to) log transformation, computing percentages or averages, grouping elements into new categories, or removing unnecessary variables or records. You are also free to incorporate external data as you see fit. Your chart image should be interpretable without recourse to your short write-up. Do not forget to include a title, axis labels, or legends as needed! As different visualizations can emphasize different aspects of a data set, you should document what aspects of the data you are attempting to most effectively communicate. In short, what story are you trying to tell? Just as important, also note which aspects of the data might be obscured or down-played due to your visualization design. In your write-up, you should provide a rigorous rationale for your design decisions. Document the visual encodings you used and why they are appropriate for the data and your specific question. These decisions include the choice of visualization type, size, color, scale, and other visual elements, as well as the use of sorting or other data transformations. How do these decisions facilitate effective communication? Let's start by exploring the data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# define some colors based on Tableau's color scheme
blue = (0 , 107/256, 164/256)
grey = [x/256 for x in (171, 171, 171)]
black = [x/256 for x in (89, 89, 89)]
###Output
_____no_output_____
###Markdown
Read in the data
###Code
data = pd.read_csv('census2000.csv')
data[:5]
tab1900 = data[data['Year'] == 1900]
tab2000 = data[data['Year'] == 2000]
tab1900_men = tab1900[tab1900['Sex'] == 1]
tab1900_women = tab1900[tab1900['Sex'] == 2]
tab2000_men = tab2000[tab2000['Sex'] == 1]
tab2000_women = tab2000[tab2000['Sex'] == 2]
###Output
_____no_output_____
###Markdown
Looking at the raw data, the number of men and women as a function of age in each century
###Code
plt.step(tab1900_men['Age'].values, tab1900_men['People'].values, label='men 1900')
plt.step(tab1900_women['Age'].values, tab1900_women['People'].values, label='women 1900')
plt.step(tab2000_men['Age'].values, tab2000_men['People'].values, label='men 2000')
plt.step(tab2000_women['Age'].values, tab2000_women['People'].values, label='women 2000')
plt.xlabel('Age (years)')
plt.ylabel('Number of People')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Let's look at the ratio of men to women, to normalize against population growth.
###Code
age = tab1900_men['Age'].values
p1m = tab1900_men['People'].values
p1w = tab1900_women['People'].values
p2m = tab2000_men['People'].values
p2w = tab2000_women['People'].values
m2w1900 = p1m/p1w
m2w2000 = p2m/p2w
def make_pretty():
    ax = fig.gca()
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    # Ticks on the right and top of the plot are generally unnecessary chartjunk.
    ax.get_xaxis().tick_bottom()
    ax.get_yaxis().tick_left()
    # Hide the tick marks themselves (booleans, not "on"/"off" strings, in current matplotlib).
    plt.tick_params(axis="both", which="both", bottom=False, top=False,
                    labelbottom=True, left=False, right=False, labelleft=True)
    plt.grid(which='major', axis='both', linestyle='dotted', alpha=0.5)
import matplotlib as mpl
#mpl.style.use('fivethirtyeight')
fig = plt.figure(figsize=(7,7/1.33))
plt.title("How does the fraction of men to women at each age group change across a century?", fontsize=12)
plt.step(age, m2w1900, color=grey, label='fraction of men to women in 1900')
plt.step(age, m2w2000, color=black, label='fraction of men to women in 2000')
make_pretty()
plt.xlabel('Age (years)')
plt.ylabel('Fraction of men to women')
plt.xlim(0.01, 90)
plt.legend(loc=(0.1,0.2))
plt.savefig('m2w_fraction.png')
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the data, I felt the most interesting question was the relative number of men to women in each age group and how that has changed over a century. Though this is rather simplistic, I believe it to be the most interesting feature in the data. Taking the ratio of men to women in each century, we can see the answer to the relevant question. The most striking feature is the relative increase in women to men in the modern era. By taking the ratio, we hide the overall population growth, which I felt was obvious and not interesting. This scaling highlights the "bump" in middle-aged men in 1900 and the faster decline of men relative to women in later life. As for the design choices, I chose to use greyscale colors based on Tableau's color-blind-safe color scheme. Also, in the name of inclusion, I have used sans-serif fonts, as these are purported to be better for dyslexic readers. I felt the plot boundaries were unnecessary and chose to add a grid to guide the eye in seeing the different age groups. I added the legend to the lower left to avoid the unnecessary white space. I also cut the limits of the x-axis for the same reason.
###Code
fig = plt.figure(figsize=(7,7/1.33))
plt.title("Relative change in men to women ratio across a century")
plt.step(age, m2w1900/m2w2000)
plt.xlabel('Age (years)')
plt.ylabel('men to women in 1900 / men to women in 2000')
make_pretty()
plt.show()
###Output
_____no_output_____
|
Binary Classification - cleaner version 26022021.ipynb
|
###Markdown
Importing Images
###Code
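# NOTE (editor's assumption): the notebook's original import cell is not part of this
# excerpt. The cells below appear to rely on at least the following imports; the local
# helpers used later (plotcm, loss) are separate files that are not shown here.
import numpy as np
import pandas as pd
import nibabel as nb
import matplotlib
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from torch.nn import Module, Sequential, Conv2d, BatchNorm2d, ReLU, MaxPool2d, Linear
from torch.utils.data import Dataset, DataLoader
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import confusion_matrix, classification_report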
# Pre-allocate arrays for the scans and lesion masks (float32 so the values from
# get_fdata() are not silently truncated to integers)
imageArray=np.zeros((31988-31768,197,233,189),dtype=np.float32)
maskArray=np.zeros((31988-31768,197,233,189),dtype=np.float32)
# concatenate all the files into a single tensor
for i in range(31768,31988):
    filename='ATLAS_R1.1 - Copy/0'+str(i)+'_t1w_deface_stx.nii.gz'
    img=nb.load(filename)
    imageArray[i-31768,:,:,:]=img.get_fdata()
###Output
_____no_output_____
###Markdown
Importing Mask
###Code
for i in range(31768,31988):
mask_name='ATLAS_R1.1 - Copy/0'+str(i)+'_LesionSmooth_stx.nii.gz'
mask=nb.load(mask_name)
maskArray[i-31768,:,:,:]=mask.get_fdata()
###Output
_____no_output_____
###Markdown
Extracting Image and Masks
###Code
lesions= maskArray[:,:,:,:].nonzero()
print(len(lesions[0]),len(lesions[1]),len(lesions[2]),len(lesions[3]))
image_count=imageArray.shape[0]
count_top=imageArray.shape[3]
count_front=imageArray.shape[2]
count_side=imageArray.shape[1]
total_pixels=imageArray.shape[1]*imageArray.shape[2]*imageArray.shape[3]
lesion_pixels=np.zeros((image_count,6))
lesion_area_top=np.zeros((image_count,count_top,7))
lesion_area_front=np.zeros((image_count,count_front,7))
lesion_area_side=np.zeros((image_count,count_side,7))
total_pixels
###Output
_____no_output_____
###Markdown
Lesion Area
###Code
# Calculating Area
for i in range(image_count):
for j in range(count_top):
        mask = maskArray[i,:,:,j].nonzero()
lesion_pix=len(mask[0])
lesion_area_top[i,j,0]= i
lesion_area_top[i,j,1]= lesion_pix
lesion_area_top[i,j,2]= 100* lesion_pix/(count_front*count_side)
lesion_area_top[i,j,3]= 1
lesion_area_top[i,j,4]= j
if lesion_pix != 0 :
lesion_area_top[i,j,5]= np.mean(mask[0])
lesion_area_top[i,j,6]= np.mean(mask[1])
for k in range(count_front):
        mask = maskArray[i,:,k,:].nonzero()
lesion_pix=len(mask[0])
lesion_area_front[i,k,0]= i
lesion_area_front[i,k,1]= lesion_pix
lesion_area_front[i,k,2]= 100* lesion_pix/(count_top*count_side)
lesion_area_front[i,k,3]= 2
lesion_area_front[i,k,4]= k
if lesion_pix != 0 :
lesion_area_front[i,k,5]= np.mean(mask[0])
lesion_area_front[i,k,6]= np.mean(mask[1])
for l in range(count_side):
        mask = maskArray[i,l,:,:].nonzero()
lesion_pix=len(mask[0])
lesion_area_side[i,l,0]= i
lesion_area_side[i,l,1]= lesion_pix
lesion_area_side[i,l,2]= 100* lesion_pix/(count_top*count_front)
lesion_area_side[i,l,3]= 3
lesion_area_side[i,l,4]= l
if lesion_pix != 0 :
lesion_area_side[i,l,5]= np.mean(mask[0])
lesion_area_side[i,l,6]= np.mean(mask[1])
lesion_area_top.shape
lesion_area_top_table=pd.DataFrame(lesion_area_top.reshape\
(image_count*count_top,7),\
columns=["Scan #","Lesion Area(pixels)",\
"Lesion Area(%)",\
"View[top=1, front=2, side=3]",\
"Slice #","Centroid x","Centroid y"])
lesion_area_front_table=pd.DataFrame(lesion_area_front.reshape\
(image_count*count_front,7),\
columns=["Scan #","Lesion Area(pixels)",\
"Lesion Area(%)",\
"View[top=1, front=2, side=3]",\
"Slice #","Centroid x","Centroid y"])
lesion_area_side_table=pd.DataFrame(lesion_area_side.reshape\
(image_count*count_side,7),\
columns=["Scan #","Lesion Area(pixels)",\
"Lesion Area(%)",\
"View[top=1, front=2, side=3]",\
"Slice #","Centroid x","Centroid y"])
# DataFrame.append is deprecated/removed in recent pandas; concatenate the three views instead
lesion_area=pd.concat([lesion_area_top_table, lesion_area_front_table,
                       lesion_area_side_table])
lesion_area_top_table.shape
top= lesion_area_top_table.loc[lesion_area_top_table.iloc[:,2]>0]
front= lesion_area_front_table.loc[lesion_area_front_table.iloc[:,2]>0]
side=lesion_area_side_table.loc[lesion_area_side_table.iloc[:,2]>0]
top.shape
n_bins=20
fig, axs= plt.subplots(1,3,sharey=True, tight_layout=True, figsize=(12,5))
axs[0].hist(top.iloc[:,2],bins=20)
axs[0].set_xlabel('Top(%)')
axs[0].set_ylabel('Frequency')
axs[1].hist(front.iloc[:,2],bins=20)
axs[1].set_xlabel('Front(%)')
axs[2].hist(side.iloc[:,2],bins=20)
axs[2].set_xlabel('Side(%)')
plt.suptitle('Percentage Lesion Area by View for slices with Lesions',y=1.02)
plt.show()
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(121, projection='3d',)
matplotlib.rcParams['font.size']=10
x=top.iloc[:,5]
y=top.iloc[:,6]
X,Y=np.meshgrid(x,y)
Z=0
hist, xedges, yedges=np.histogram2d(x,y,bins=20, range=[[20,x.max()],[20,y.max()]])
# Construct arrays for the anchor positions of the 20x20 = 400 bars.
xpos, ypos = np.meshgrid(xedges[:-1] + 10, yedges[:-1] +10, indexing="ij")
xpos = xpos.ravel()
ypos = ypos.ravel()
zpos = 0
# Construct arrays with the dimensions for the 400 bars.
dx = dy = 0.5 * np.ones_like(zpos)
dz = hist.ravel()
ax.bar3d(xpos, ypos, zpos, dx, dy, dz, zsort='average')
#ax.bar3d(X, Y, Z, dx, dy, dz, zsort='average')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('Frequency')
ax.set_title('Centroid Location Top View')
plt.tight_layout()
ax2 = fig.add_subplot(122)
data=imageArray[41,:,:,:]
plt.imshow(data[:,:,data.shape[2]//2], cmap='Greys_r')
ax2.set_xlabel('y')
ax2.set_ylabel('x')
ax2.set_title('Sample Slice')
plt.tight_layout()
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(121, projection='3d')
matplotlib.rcParams['font.size']=10
x=front.iloc[:,5]
y=front.iloc[:,6]
X,Y=np.meshgrid(x,y)
Z=0
hist, xedges, yedges=np.histogram2d(x,y,bins=20, range=[[20,x.max()],[20,y.max()]])
# Construct arrays for the anchor positions of the 20x20 = 400 bars.
xpos, ypos = np.meshgrid(xedges[:-1] + 10, yedges[:-1] +10, indexing="ij")
xpos = xpos.ravel()
ypos = ypos.ravel()
zpos = 0
# Construct arrays with the dimensions for the 400 bars.
dx = dy = 0.5 * np.ones_like(zpos)
dz = hist.ravel()
ax.bar3d(xpos, ypos, zpos, dx, dy, dz, zsort='average')
#ax.bar3d(X, Y, Z, dx, dy, dz, zsort='average')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('Frequency')
ax.set_title('Centroid Location Front View')
ax2 = fig.add_subplot(122)
data=imageArray[41,:,:,:]
plt.imshow(data[:,data.shape[2]//2,:], cmap='Greys_r')
ax2.set_title('Sample Slice')
ax2.set_xlabel('y')
ax2.set_ylabel('x')
plt.tight_layout()
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(121, projection='3d')
matplotlib.rcParams['font.size']=10
x=side.iloc[:,5]
y=side.iloc[:,6]
X,Y=np.meshgrid(x,y)
Z=0
hist, xedges, yedges=np.histogram2d(x,y,bins=20, range=[[20,x.max()],[20,y.max()]])
# Construct arrays for the anchor positions of the 20x20 = 400 bars.
xpos, ypos = np.meshgrid(xedges[:-1] + 10, yedges[:-1] +10, indexing="ij")
xpos = xpos.ravel()
ypos = ypos.ravel()
zpos = 0
# Construct arrays with the dimensions for the 400 bars.
dx = dy = 0.5 * np.ones_like(zpos)
dz = hist.ravel()
ax.bar3d(xpos, ypos, zpos, dx, dy, dz, zsort='average')
#ax.bar3d(X, Y, Z, dx, dy, dz, zsort='average')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('Frequency')
ax.set_title('Centroid Location Side View')
ax2 = fig.add_subplot(122)
data=imageArray[41,:,:,:]
plt.imshow(data[data.shape[2]//2,:,:], cmap='Greys_r')
ax2.set_title('Sample Slice')
ax2.set_xlabel('y')
ax2.set_ylabel('x')
plt.tight_layout()
hist
lesion_area.to_csv("lesion_area_centroid.csv")
###Output
_____no_output_____
###Markdown
Number of Slices with Lesions
###Code
for i in range(image_count):
    mask = maskArray[i,:,:,:].nonzero()
# Non zero index from different views
top_slices=mask[2]
lesion_slices_top=np.unique(top_slices)
num_lslices_top=len(lesion_slices_top)
per_top=num_lslices_top/maskArray.shape[3] * 100
front_slices=mask[1]
lesion_slices_front=np.unique(front_slices)
num_lslices_front=len(lesion_slices_front)
per_front=num_lslices_front/maskArray.shape[2] * 100
side_slices=mask[0]
lesion_slices_side=np.unique(side_slices)
num_lslices_side=len(lesion_slices_side)
per_side=num_lslices_side/maskArray.shape[1] * 100
lesion_pixels[i,0] = num_lslices_top
lesion_pixels[i,1] = per_top
lesion_pixels[i,2] = num_lslices_front
lesion_pixels[i,3] = per_front
lesion_pixels[i,4] = num_lslices_side
lesion_pixels[i,5] = per_side
lesion_slices_table=pd.DataFrame(lesion_pixels,columns=\
["Top lesion Slices","Percentage with Lesion Top",\
"Front Lesion Slices","Percentage with Lesion Front",\
"Side Lesion Slices", "Percentage with Lesion Side"])
matplotlib.rcParams['font.size']=16
n_bins=20
fig, axs= plt.subplots(1,3,sharey=True, tight_layout=True, figsize=(12,5))
axs[0].hist(lesion_slices_table["Percentage with Lesion Top"])
axs[0].set_xlabel('Top(%)')
axs[0].set_ylabel('Frequency')
axs[1].hist(lesion_slices_table["Percentage with Lesion Front"])
axs[1].set_xlabel('Front(%)')
axs[2].hist(lesion_slices_table["Percentage with Lesion Side"])
axs[2].set_xlabel('Side(%)')
plt.suptitle('Percentage Lesion slices per Scan by View',y=1.02)
plt.show()
lesion_slices_table.to_csv(r'./lesion_slices.csv')
###Output
_____no_output_____
###Markdown
Binary Classification of Lesions. The first step in using a 2D classification algorithm on MRI is to extract individual slices from the scans. The first iteration extracts a single slice with a lesion and another without a lesion. How do we know if a lesion image is correct? The extracted slices and their masks need to be plotted afterwards to check.
###Code
# Top-view slice indices that contain lesion voxels for scan 1
lesion_index=np.array(maskArray[1,:,:,:].nonzero())
top_index=np.unique(lesion_index[2])
# Remove those slices (axis 2 of the per-scan volume) to keep only lesion-free slices
zero_slices=np.delete(imageArray[1,:,:,:],top_index,axis=2)
zero_slices_mask=np.delete(maskArray[1,:,:,:],top_index,axis=2)
zero_slices.shape[2]
plt.imshow(zero_slices[:,:,65],cmap="Greys_r")
# Partial data
image_count=imageArray.shape[0]
lesion_sections=np.zeros((image_count,197,233))
mask_sections=np.zeros((image_count,197,233))
lesion_sections2=np.zeros((image_count,197,233))
mask_sections2=np.zeros((image_count,197,233))
target1=np.zeros((image_count*2))
for i in range(image_count):
# 1. Get 3 tuples with coordinate for each dimension
lesion_index=np.array(maskArray[i,:,:,:].nonzero())
# 2. Get index of slices with lesions along the top view
top_index=np.unique(lesion_index[2])
# 3. Get index of slices without lesions along top view
top_lf_index=np.delete(np.arange(maskArray.shape[3]),top_index,axis=0)
zero_lslices=np.delete(imageArray[i,:,:,:], top_index, axis=2)
# For validation
zero_slices_mask=np.delete(maskArray[i,:,:,:],top_index,axis=2)
# Gives a list of coordinates with each column representing a dimension
zero_index=np.argwhere(maskArray[i,:,:,:]==0)
    if lesion_index.size==0:
        # No lesion in this scan: fall back to the middle slice along the top view (axis 3)
        lesion_sections[i,:,:]=imageArray[i,:,:,imageArray.shape[3]//2]
        mask_sections[i,:,:]=maskArray[i,:,:,maskArray.shape[3]//2]
        continue
# Taking a median value from the third dimension of Lesion index
lesion_sections[i,:,:]=imageArray[i,:,:,int(np.median(lesion_index[2]))]
mask_sections[i,:,:]=maskArray[i,:,:,int(np.median(lesion_index[2]))]
target1[i]=1
# Median of all values in the 3rd column
lesion_sections2[i,:,:]=zero_lslices[:,:,zero_lslices.shape[2]//2]
mask_sections2[i,:,:]=zero_slices_mask[:,:,zero_slices_mask.shape[2]//2]
scaler=StandardScaler()
# Standardising the scans
lesion_sections[i,:,:]=scaler.fit_transform(lesion_sections[i,:,:])
lesion_sections2[i,:,:]=scaler.fit_transform(lesion_sections2[i,:,:])
lesion_sections.shape
# Checking if lesion actually extracted
plt.subplot(121)
plt.imshow(mask_sections2[20,:,:],cmap="gray")
plt.subplot(122)
plt.imshow(mask_sections[20,:,:],cmap="gray")
plt.show()
# This should be 0 but it's not
print(np.sum(mask_sections2))
a=np.arange(30).reshape(3,5,2)
a
train_x1=lesion_sections.reshape(220,1,197*233)
train_x2=lesion_sections2.reshape(220,1,197*233)
# turn to torch and concatenate
train_x1=torch.from_numpy(train_x1).float()
train_x2=torch.from_numpy(train_x2).float()
train_x=torch.cat([train_x1,train_x2],dim=0)
train_y1=target1.reshape(440,1)
train_y=torch.from_numpy(train_y1).float()
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size=0.33, random_state=69)
X_train.shape
EPOCHS = 30
BATCH_SIZE = 48
LEARNING_RATE = 0.001
class trainData(Dataset):
def __init__(self, X_data, y_data):
self.X_data = X_data
self.y_data = y_data
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
def __len__ (self):
return len(self.X_data)
train_data = trainData(torch.FloatTensor(X_train),
torch.FloatTensor(y_train))
## test data
class testData(Dataset):
def __init__(self, X_data):
self.X_data = X_data
def __getitem__(self, index):
return self.X_data[index]
def __len__ (self):
return len(self.X_data)
test_data = testData(torch.FloatTensor(X_test))
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_data, batch_size=1)
class binaryClassification(nn.Module):
def __init__(self):
super(binaryClassification, self).__init__()
# Number of input features is 197*233.
self.layer_1 = nn.Linear(45901, 128)
self.layer_2 = nn.Linear(128, 64)
self.layer_3 = nn.Linear(64, 64)
self.layer_out = nn.Linear(64, 1)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(p=0.1)
self.batchnorm1 = nn.BatchNorm1d(1)
self.batchnorm2 = nn.BatchNorm1d(1)
self.batchnorm3 = nn.BatchNorm1d(1)
def forward(self, inputs):
x = self.relu(self.layer_1(inputs))
x = self.batchnorm1(x)
x = self.relu(self.layer_2(x))
x = self.batchnorm2(x)
x = self.dropout(x)
x = self.relu(self.layer_3(x))
x = self.batchnorm3(x)
x = self.dropout(x)
x = self.layer_out(x)
return x
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#device=torch.device("cpu")
print(device)
model = binaryClassification()
model.to(device)
print(model)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
def binary_acc(y_pred, y_test):
y_pred_tag = torch.round(torch.sigmoid(y_pred))
correct_results_sum = (y_pred_tag == y_test).sum().float()
acc = correct_results_sum/y_test.shape[0]
acc = torch.round(acc * 100)
return acc
model.train()
for e in range(1, EPOCHS+1):
epoch_loss = 0
epoch_acc = 0
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch.unsqueeze(1))
acc = binary_acc(y_pred, y_batch.unsqueeze(1))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')
y_pred_list = []
model.eval()
with torch.no_grad():
for X_batch in test_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_test_pred = torch.sigmoid(y_test_pred)
y_pred_tag = torch.round(y_test_pred)
y_pred_list.append(y_pred_tag.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
confusion_matrix(y_test, y_pred_list)
cm1=confusion_matrix(y_test, y_pred_list)
plot_confusion_matrix(cm1,(0,1))
print(classification_report(y_test, y_pred_list))
###Output
_____no_output_____
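###Markdown
`plot_confusion_matrix` comes from a local `plotcm.py` helper that is not included in this excerpt. A minimal stand-in (an assumption about what that helper does, based on how it is called here) could look like this:
###Code
import itertools
import numpy as np
import matplotlib.pyplot as plt

def plot_confusion_matrix(cm, classes, cmap=plt.cm.Blues):
    """Display a confusion matrix with per-cell counts."""
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title('Confusion matrix')
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.0
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], 'd'),
                 horizontalalignment='center',
                 color='white' if cm[i, j] > thresh else 'black')
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()
###Output
_____no_output_____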
###Markdown
Binary Classification with all lesion slices. Extracting and separating all lesion and lesion-free slices along the top view.
###Code
# All Slices
image_count=imageArray.shape[0]
all_lesion_slices=np.array([])
all_zero_slices=np.array([])
all_lesion_masks=np.array([])
zero_sum=0
for i in range(image_count):
# 1. Get 3 tuples with coordinate for each dimension
lesion_index=np.array(maskArray[i,:,:,:].nonzero())
# 2. Get index of slices with lesions along the top view
top_index=np.unique(lesion_index[2])
# 3. Get index of slices without lesions along top view
top_lf_index=np.delete(np.arange(maskArray.shape[3]),top_index,axis=0)
# 4. Create a scan with lesion and lesion free slices with the top view
zero_slices =imageArray[i,:,:,top_lf_index]
lesion_slices=imageArray[i,:,:,top_index]
lesion_masks=maskArray[i,:,:,top_index]
# 5. Zero mask sum for validation- all zero slice masks should = 0
zero_sum+=np.sum(maskArray[i,:,:,top_lf_index])
# All lesion slices top view combined
all_lesion_slices=np.concatenate((all_lesion_slices, lesion_slices), axis=0)\
if all_lesion_slices.size else lesion_slices
all_zero_slices=np.concatenate((all_zero_slices, zero_slices), axis=0)\
if all_zero_slices.size else zero_slices
# All lesion masks top view combined
all_lesion_masks=np.concatenate((all_lesion_masks, lesion_masks), axis=0)\
if all_lesion_masks.size else lesion_masks
print(f"all lesion slices: {all_lesion_slices.shape}, all_zero_slices: {all_zero_slices.shape}")
# Xval formulation
l_slice_num=all_lesion_slices.shape[0]
target1=np.ones(l_slice_num)
# Keep every 3rd lesion-free slice to roughly balance the classes; size the array from the actual selection
zero_slice_num=all_zero_slices[::3,:,:].shape[0]
target2=np.zeros(zero_slice_num)
target=np.concatenate((target1,target2),axis=0)
train_x1=all_lesion_slices.reshape(l_slice_num,1,197*233)
train_x2=all_zero_slices[::3,:,:].reshape(zero_slice_num,1,197*233)
# turn to torch and concatenate afterwards
train_x1=torch.from_numpy(train_x1).float()
train_x2=torch.from_numpy(train_x2).float()
train_x=torch.cat([train_x1,train_x2],dim=0)
zeros_x2=torch.zeros_like(train_x2)
# means = train_x.mean(dim=1, keepdim=True)
# stds = train_x.std(dim=1, keepdim=True)
# normalized_data = (train_x - means) / stds
# train_x=normalized_data
train_y1=target.reshape(len(target),1)
train_y=torch.from_numpy(train_y1).float()
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size=0.2, random_state=69)
#X_train, X_val,y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=69)
mask_x1=all_lesion_masks.reshape(l_slice_num,1,197*233)
# creating a mask array for later illustration
mask_x1=torch.from_numpy(mask_x1).float()
mask_x=torch.cat([mask_x1,zeros_x2],dim=0)
# To free up RAM
all_lesion_slices=[]
all_zero_slices=[]
all_lesion_masks=[]
print(train_x.shape)
kfold=KFold(shuffle=True)
train_index_list=[]
test_index_list=[]
for train_index,test_index in kfold.split(train_x, train_y):
print("TRAIN:", train_index, "TEST:", test_index)
train_index_list.append(train_index)
test_index_list.append(test_index)
X_train=train_x[train_index_list[0]]
y_train=train_y[train_index_list[0]]
X_test=train_x[test_index_list[0]]
y_test=train_y[test_index_list[0]]
# mask for illustration
mask_train=mask_x[train_index_list[0]]
mask_test=mask_x[test_index_list[0]]
# train_index_list is a plain Python list, so it has no .shape attribute; use len() instead
len(train_index_list)
###Output
_____no_output_____
###Markdown
Code block for CrossVal *Make sure to run the code blocks for Binary classification below before running this*
###Code
# Cross validation loops
reports=[]
c_matrix=[]
for fold in range(len(train_index_list)):
X_train=train_x[train_index_list[fold]]
y_train=train_y[train_index_list[fold]]
X_test=train_x[test_index_list[fold]]
y_test=train_y[test_index_list[fold]]
EPOCHS = 30
BATCH_SIZE = 32
LEARNING_RATE = 0.01
train_data = trainData(torch.FloatTensor(X_train),
torch.FloatTensor(y_train))
test_data = testData(torch.FloatTensor(X_test))
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_data, batch_size=1)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = binaryClassification()
model.to(device)
print(model)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
model.train()
for e in range(1, EPOCHS+1):
epoch_loss = 0
epoch_acc = 0
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch.unsqueeze(1))
acc = binary_acc(y_pred, y_batch.unsqueeze(1))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')
y_pred_list = []
tested_x=[]
model.eval()
with torch.no_grad():
for X_batch in test_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_test_pred = torch.sigmoid(y_test_pred)
y_pred_tag = torch.round(y_test_pred)
y_pred_list.append(y_pred_tag.cpu().numpy())
tested_x.append(X_batch.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
cm=confusion_matrix(y_test, y_pred_list)
result=classification_report(y_test, y_pred_list,output_dict=True)
reports.append(result)
c_matrix.append(cm)
###Output
binaryClassification(
(layer_1): Linear(in_features=45901, out_features=128, bias=True)
(layer_2): Linear(in_features=128, out_features=64, bias=True)
(layer_3): Linear(in_features=64, out_features=64, bias=True)
(layer_out): Linear(in_features=64, out_features=1, bias=True)
(relu): ReLU()
(dropout): Dropout(p=0.1, inplace=False)
(batchnorm1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm2): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm3): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
Epoch 001: | Loss: 0.47225 | Acc: 77.060
Epoch 002: | Loss: 0.39946 | Acc: 82.032
Epoch 003: | Loss: 0.35998 | Acc: 84.261
Epoch 004: | Loss: 0.31840 | Acc: 86.786
Epoch 005: | Loss: 0.29233 | Acc: 87.968
Epoch 006: | Loss: 0.26192 | Acc: 89.397
Epoch 007: | Loss: 0.24640 | Acc: 89.806
Epoch 008: | Loss: 0.21842 | Acc: 91.273
Epoch 009: | Loss: 0.20731 | Acc: 91.970
Epoch 010: | Loss: 0.18410 | Acc: 92.822
Epoch 011: | Loss: 0.15195 | Acc: 94.413
Epoch 012: | Loss: 0.13947 | Acc: 94.904
Epoch 013: | Loss: 0.12993 | Acc: 95.108
Epoch 014: | Loss: 0.11963 | Acc: 95.629
Epoch 015: | Loss: 0.12011 | Acc: 95.671
Epoch 016: | Loss: 0.10375 | Acc: 96.399
Epoch 017: | Loss: 0.08974 | Acc: 96.806
Epoch 018: | Loss: 0.07752 | Acc: 97.477
Epoch 019: | Loss: 0.07964 | Acc: 97.078
Epoch 020: | Loss: 0.07856 | Acc: 97.281
Epoch 021: | Loss: 0.07262 | Acc: 97.515
Epoch 022: | Loss: 0.07295 | Acc: 97.479
Epoch 023: | Loss: 0.07024 | Acc: 97.643
Epoch 024: | Loss: 0.07595 | Acc: 97.359
Epoch 025: | Loss: 0.06574 | Acc: 97.659
Epoch 026: | Loss: 0.06423 | Acc: 97.798
Epoch 027: | Loss: 0.06056 | Acc: 97.834
Epoch 028: | Loss: 0.05605 | Acc: 98.040
Epoch 029: | Loss: 0.05757 | Acc: 98.100
Epoch 030: | Loss: 0.05488 | Acc: 98.176
binaryClassification(
(layer_1): Linear(in_features=45901, out_features=128, bias=True)
(layer_2): Linear(in_features=128, out_features=64, bias=True)
(layer_3): Linear(in_features=64, out_features=64, bias=True)
(layer_out): Linear(in_features=64, out_features=1, bias=True)
(relu): ReLU()
(dropout): Dropout(p=0.1, inplace=False)
(batchnorm1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm2): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm3): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
Epoch 001: | Loss: 0.46964 | Acc: 77.337
Epoch 002: | Loss: 0.40281 | Acc: 81.637
Epoch 003: | Loss: 0.36051 | Acc: 84.120
Epoch 004: | Loss: 0.32708 | Acc: 86.012
Epoch 005: | Loss: 0.29642 | Acc: 87.445
Epoch 006: | Loss: 0.26304 | Acc: 88.808
Epoch 007: | Loss: 0.23144 | Acc: 90.794
Epoch 008: | Loss: 0.20597 | Acc: 91.838
Epoch 009: | Loss: 0.19178 | Acc: 92.649
Epoch 010: | Loss: 0.17198 | Acc: 93.391
Epoch 011: | Loss: 0.16565 | Acc: 93.579
Epoch 012: | Loss: 0.14629 | Acc: 94.559
Epoch 013: | Loss: 0.13884 | Acc: 94.926
Epoch 014: | Loss: 0.12098 | Acc: 95.667
Epoch 015: | Loss: 0.10062 | Acc: 96.251
Epoch 016: | Loss: 0.11322 | Acc: 95.852
Epoch 017: | Loss: 0.10653 | Acc: 96.148
Epoch 018: | Loss: 0.09103 | Acc: 96.663
Epoch 019: | Loss: 0.10278 | Acc: 96.319
Epoch 020: | Loss: 0.08000 | Acc: 97.146
Epoch 021: | Loss: 0.08070 | Acc: 97.166
Epoch 022: | Loss: 0.08224 | Acc: 96.952
Epoch 023: | Loss: 0.07685 | Acc: 97.222
Epoch 024: | Loss: 0.08333 | Acc: 97.124
Epoch 025: | Loss: 0.07276 | Acc: 97.433
Epoch 026: | Loss: 0.05565 | Acc: 98.056
Epoch 027: | Loss: 0.06748 | Acc: 97.523
Epoch 028: | Loss: 0.05734 | Acc: 97.926
Epoch 029: | Loss: 0.06048 | Acc: 97.890
Epoch 030: | Loss: 0.05214 | Acc: 98.226
binaryClassification(
(layer_1): Linear(in_features=45901, out_features=128, bias=True)
(layer_2): Linear(in_features=128, out_features=64, bias=True)
(layer_3): Linear(in_features=64, out_features=64, bias=True)
(layer_out): Linear(in_features=64, out_features=1, bias=True)
(relu): ReLU()
(dropout): Dropout(p=0.1, inplace=False)
(batchnorm1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm2): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm3): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
Epoch 001: | Loss: 0.47183 | Acc: 77.176
Epoch 002: | Loss: 0.38920 | Acc: 82.689
Epoch 003: | Loss: 0.35089 | Acc: 84.417
Epoch 004: | Loss: 0.30787 | Acc: 86.772
Epoch 005: | Loss: 0.27648 | Acc: 88.106
Epoch 006: | Loss: 0.24911 | Acc: 89.495
Epoch 007: | Loss: 0.22412 | Acc: 90.784
Epoch 008: | Loss: 0.20467 | Acc: 91.768
Epoch 009: | Loss: 0.18750 | Acc: 92.521
Epoch 010: | Loss: 0.17264 | Acc: 93.148
Epoch 011: | Loss: 0.15621 | Acc: 93.898
Epoch 012: | Loss: 0.14098 | Acc: 94.487
Epoch 013: | Loss: 0.13165 | Acc: 94.970
Epoch 014: | Loss: 0.12000 | Acc: 95.467
Epoch 015: | Loss: 0.11419 | Acc: 95.774
Epoch 016: | Loss: 0.10563 | Acc: 96.152
Epoch 017: | Loss: 0.09893 | Acc: 96.469
Epoch 018: | Loss: 0.09396 | Acc: 96.619
Epoch 019: | Loss: 0.08132 | Acc: 97.068
Epoch 020: | Loss: 0.07804 | Acc: 97.170
Epoch 021: | Loss: 0.07138 | Acc: 97.435
Epoch 022: | Loss: 0.06863 | Acc: 97.537
Epoch 023: | Loss: 0.07118 | Acc: 97.521
Epoch 024: | Loss: 0.05414 | Acc: 98.050
Epoch 025: | Loss: 0.06403 | Acc: 97.697
Epoch 026: | Loss: 0.05525 | Acc: 98.048
Epoch 027: | Loss: 0.05322 | Acc: 98.100
Epoch 028: | Loss: 0.05335 | Acc: 98.034
Epoch 029: | Loss: 0.04884 | Acc: 98.230
Epoch 030: | Loss: 0.05107 | Acc: 98.180
binaryClassification(
(layer_1): Linear(in_features=45901, out_features=128, bias=True)
(layer_2): Linear(in_features=128, out_features=64, bias=True)
(layer_3): Linear(in_features=64, out_features=64, bias=True)
(layer_out): Linear(in_features=64, out_features=1, bias=True)
(relu): ReLU()
(dropout): Dropout(p=0.1, inplace=False)
(batchnorm1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm2): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm3): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
Epoch 001: | Loss: 0.47525 | Acc: 76.886
Epoch 002: | Loss: 0.41638 | Acc: 81.044
Epoch 003: | Loss: 0.37950 | Acc: 83.108
Epoch 004: | Loss: 0.34632 | Acc: 85.273
Epoch 005: | Loss: 0.31856 | Acc: 86.333
Epoch 006: | Loss: 0.28977 | Acc: 87.741
Epoch 007: | Loss: 0.26932 | Acc: 88.741
Epoch 008: | Loss: 0.24911 | Acc: 89.794
Epoch 009: | Loss: 0.21698 | Acc: 91.697
Epoch 010: | Loss: 0.19470 | Acc: 92.439
Epoch 011: | Loss: 0.17707 | Acc: 93.285
Epoch 012: | Loss: 0.16707 | Acc: 93.655
Epoch 013: | Loss: 0.14655 | Acc: 94.764
Epoch 014: | Loss: 0.13938 | Acc: 94.922
Epoch 015: | Loss: 0.14562 | Acc: 94.629
Epoch 016: | Loss: 0.12127 | Acc: 95.597
Epoch 017: | Loss: 0.10333 | Acc: 96.301
Epoch 018: | Loss: 0.10850 | Acc: 96.022
Epoch 019: | Loss: 0.10056 | Acc: 96.337
Epoch 020: | Loss: 0.09717 | Acc: 96.399
Epoch 021: | Loss: 0.08454 | Acc: 96.848
Epoch 022: | Loss: 0.08247 | Acc: 97.170
Epoch 023: | Loss: 0.07430 | Acc: 97.345
Epoch 024: | Loss: 0.07727 | Acc: 97.377
Epoch 025: | Loss: 0.06923 | Acc: 97.641
Epoch 026: | Loss: 0.06369 | Acc: 97.824
Epoch 027: | Loss: 0.09319 | Acc: 96.792
Epoch 028: | Loss: 0.08133 | Acc: 97.012
Epoch 029: | Loss: 0.07535 | Acc: 97.319
Epoch 030: | Loss: 0.06065 | Acc: 97.960
binaryClassification(
(layer_1): Linear(in_features=45901, out_features=128, bias=True)
(layer_2): Linear(in_features=128, out_features=64, bias=True)
(layer_3): Linear(in_features=64, out_features=64, bias=True)
(layer_out): Linear(in_features=64, out_features=1, bias=True)
(relu): ReLU()
(dropout): Dropout(p=0.1, inplace=False)
(batchnorm1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm2): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm3): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
Epoch 001: | Loss: 0.47124 | Acc: 77.178
Epoch 002: | Loss: 0.40228 | Acc: 81.884
Epoch 003: | Loss: 0.35675 | Acc: 84.321
###Markdown
CrossVal results
###Code
print(result)
print(reports[0])
# Dataframe for all results
all_runs=pd.DataFrame(reports[0])
all_runs1=pd.DataFrame(reports[1])
all_runs2=pd.DataFrame(reports[2])
all_runs3=pd.DataFrame(reports[3])
all_runs4=pd.DataFrame(reports[4])
all_runs_sum=all_runs+all_runs1+all_runs2+all_runs3+all_runs4
all_runs_ave=all_runs_sum/5
all_runs_ave
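# A more compact aggregation (editor's sketch) that works for any number of folds:
# concatenate the per-fold report DataFrames and average rows by index label.
all_runs_ave_alt = pd.concat([pd.DataFrame(r) for r in reports]).groupby(level=0).mean()
all_runs_ave_alt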
###Output
_____no_output_____
###Markdown
Binary classifier
###Code
EPOCHS = 30
BATCH_SIZE = 32
LEARNING_RATE = 0.01
class trainData(Dataset):
def __init__(self, X_data, y_data):
self.X_data = X_data
self.y_data = y_data
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
def __len__ (self):
return len(self.X_data)
## test data
class testData(Dataset):
def __init__(self, X_data):
self.X_data = X_data
def __getitem__(self, index):
return self.X_data[index]
def __len__ (self):
return len(self.X_data)
train_data = trainData(torch.FloatTensor(X_train),
torch.FloatTensor(y_train))
test_data = testData(torch.FloatTensor(X_test))
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_data, batch_size=1)
class binaryClassification(nn.Module):
def __init__(self):
super(binaryClassification, self).__init__()
# Number of input features is 197*233.
self.layer_1 = nn.Linear(45901, 128)
self.layer_2 = nn.Linear(128, 64)
self.layer_3 = nn.Linear(64, 64)
self.layer_out = nn.Linear(64, 1)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(p=0.1)
self.batchnorm1 = nn.BatchNorm1d(1)
self.batchnorm2 = nn.BatchNorm1d(1)
self.batchnorm3 = nn.BatchNorm1d(1)
def forward(self, inputs):
x = self.relu(self.layer_1(inputs))
x = self.batchnorm1(x)
x = self.relu(self.layer_2(x))
x = self.batchnorm2(x)
x = self.dropout(x)
x = self.relu(self.layer_3(x))
x = self.batchnorm3(x)
x = self.dropout(x)
x = self.layer_out(x)
return x
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#device=torch.device("cpu")
print(device)
model = binaryClassification()
model.to(device)
print(model)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
def binary_acc(y_pred, y_test):
y_pred_tag = torch.round(torch.sigmoid(y_pred))
correct_results_sum = (y_pred_tag == y_test).sum().float()
acc = correct_results_sum/y_test.shape[0]
acc = torch.round(acc * 100)
return acc
model.train()
for e in range(1, EPOCHS+1):
epoch_loss = 0
epoch_acc = 0
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch.unsqueeze(1))
acc = binary_acc(y_pred, y_batch.unsqueeze(1))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')
y_pred_list = []
tested_x=[]
model.eval()
with torch.no_grad():
for X_batch in test_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_test_pred = torch.sigmoid(y_test_pred)
y_pred_tag = torch.round(y_test_pred)
y_pred_list.append(y_pred_tag.cpu().numpy())
tested_x.append(X_batch.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
l=len(tested_x)
print(l)
test_input=np.array(X_test).reshape(l,197,233)
tested_mask=np.array(mask_test).reshape(l,197,233)
tested_mask.shape
Tested_X= np.array(tested_x).reshape(l,197,233)
Tested_X.shape
# Comparing the input set with the set provided by dataloader to confirm order still intact
test_input==Tested_X
# can no longer validate with non zero function due to images being added.
Tested_X=Tested_X+tested_mask
y_pred_np=np.array(y_pred_list)
y_test_np=np.array(y_test).reshape(len(y_pred_np))
test=y_test_np-y_pred_np
# which didn't match with true
incorrect_index=np.nonzero(test)
# Which scans were incorrect
incorrect_xset=Tested_X[incorrect_index[0],:,:]
# what was the true label when prediction false?
labels_p0 = y_test_np[incorrect_index[0]]
# Incorrectly classified lesions index
p0_t1_index=np.nonzero(labels_p0)
p0_t1_xset=incorrect_xset[p0_t1_index[0],:,:]
p0_t1=labels_p0[p0_t1_index]
p1_t0_xset=np.delete(incorrect_xset,p0_t1_index,axis=0)
p1_t0=np.delete(labels_p0,p0_t1_index,axis=0)
# Which scans were correctly classified
correct_xset = np.delete(Tested_X,incorrect_index,axis=0)
labels_p1 = np.delete(y_test_np,incorrect_index,axis=0)
p1_t1_index = np.nonzero(labels_p1)
p1_t1_xset = correct_xset[p1_t1_index[0],:,:]
p1_t1 = labels_p1[p1_t1_index]
p0_t0_xset=np.delete(correct_xset,p1_t1_index,axis=0)
p0_t0=np.delete(labels_p1,p1_t1_index,axis=0)
print(incorrect_xset.shape[0],"+", correct_xset.shape[0],"=",\
Tested_X.shape[0])
max(p0_t1)
matplotlib.rcParams['font.size']=12
fig=plt.figure(figsize=(10,10))
plt.subplot(221)
i=np.random.randint(0, len(p1_t0_xset))
plt.imshow(p1_t0_xset[i],cmap='Greys_r')
plt.title(f"Incorrect Pred.,True label: No lesion" )
plt.subplot(222)
j=np.random.randint(0, len(p0_t1_xset))
plt.title(f"Incorrect Pred., True label: Lesion" )
plt.imshow(p0_t1_xset[j],cmap='Greys_r')
plt.show()
fig=plt.figure(figsize=(10,10))
plt.subplot(223)
i=np.random.randint(0, len(p1_t1_xset))
plt.imshow(p1_t1_xset[i],cmap='Greys_r')
plt.title(f"Correct Pred.,True label: Lesion" )
plt.subplot(224)
j=np.random.randint(0, len(p0_t0_xset))
plt.title(f"Correct Pred., True label: No Lesion" )
plt.imshow(p0_t0_xset[j],cmap='Greys_r')
plt.show()
plt.tight_layout()
cm=confusion_matrix(y_test, y_pred_list)
type(cm)
from plotcm import plot_confusion_matrix
plot_confusion_matrix(cm,(0,1))
print(classification_report(y_test, y_pred_list))
result1=classification_report(y_test, y_pred_list)
print(result1)
###Output
_____no_output_____
###Markdown
Binary classification with CNN
###Code
# Conv layer output-size calculator: out = (in - kernel + 2*padding)/stride + 1
h=197
w=233
p=1
k=3
s=1
h=(h-k+2*p)/s + 1
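# Editor's sketch (not in the original): helpers for the same arithmetic, used here to
# verify the 14210-feature input of the Linear layer in the network defined below.
def conv_out(size, k=3, s=1, p=1):
    # Conv2d output size: floor((in - kernel + 2*padding)/stride) + 1
    return (size - k + 2 * p) // s + 1

def pool_out(size):
    # MaxPool2d(kernel_size=2, stride=2) halves the size (floor division)
    return size // 2

h_out = pool_out(conv_out(pool_out(conv_out(197))))   # 197 -> 197 -> 98 -> 98 -> 49
w_out = pool_out(conv_out(pool_out(conv_out(233))))   # 233 -> 233 -> 116 -> 116 -> 58
print(h_out, w_out, 5 * h_out * w_out)                # 49 58 14210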
# Module has been put into Net- inheriting characteristics - like cat(animal) would pass animal characteristics to cat class
# What attributes does Module already have ~\anaconda3\lib\site-packages\torch\nn\modules\module.py
class Net(Module):
def __init__(self):
super(Net, self).__init__()
self.cnn_layers=Sequential(
            # First 2D convolution layer: 197x233 in -> 197x233 out (3x3 kernel, padding 1)
Conv2d(1,5,kernel_size=3,stride=1,padding=1),
BatchNorm2d(5),
ReLU(inplace=True),
            # Max pool: 197x233 -> 98x116
MaxPool2d(kernel_size=2, stride=2),
            # Second 2D convolution layer: 98x116 -> 98x116
Conv2d(5,5,kernel_size=3, stride=1, padding=1),
BatchNorm2d(5),
ReLU(inplace=True),
            # Max pool: 98x116 -> 49x58 (5 channels, so 5*49*58 = 14210 features)
MaxPool2d(kernel_size=2, stride=2)
)
self.linear_layers = Sequential(
Linear(14210, 64),
Linear(64,1)
)
# Defining the forward pass
def forward(self, x):
x = self.cnn_layers(x)
x = x.view(x.size(0), -1)
#print(x.size())
x = self.linear_layers(x)
return x
x=torch.randn(1,1,197,233)
model=Net()
model(x)
# NOTE: re-run the slice-extraction cell first; these arrays were cleared earlier to free RAM
l_slice_num=all_lesion_slices.shape[0]
target1=np.ones(l_slice_num)
# Keep every 3rd lesion-free slice; size the array from the actual strided selection
zero_slice_num=all_zero_slices[::3,:,:].shape[0]
target2=np.zeros(zero_slice_num)
target=np.concatenate((target1,target2),axis=0)
train_x1=all_lesion_slices.reshape(l_slice_num,1,197,233)
train_x2=all_zero_slices[::3,:,:].reshape(zero_slice_num,1,197,233)
# turn to torch and concatenate
train_x1=torch.from_numpy(train_x1).float()
train_x2=torch.from_numpy(train_x2).float()
train_x=torch.cat([train_x1,train_x2],dim=0)
# means = train_x.mean(dim=1, keepdim=True)
# stds = train_x.std(dim=1, keepdim=True)
# normalized_data = (train_x - means) / stds
# train_x=normalized_data
train_y1=target.reshape(len(target))
train_y=torch.from_numpy(train_y1).float()
train_x1=lesion_sections.reshape(220,1,197,233)
train_x2=lesion_sections2.reshape(220,1,197,233)
# turn to torch and concatenate
train_x1=torch.from_numpy(train_x1).float()
train_x2=torch.from_numpy(train_x2).float()
train_x=torch.cat([train_x1,train_x2],dim=0)
train_y1=target1.reshape(440)
train_y=torch.from_numpy(train_y1).float()
target
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size=0.33, random_state=69)
class trainData(Dataset):
def __init__(self, X_data, y_data):
self.X_data = X_data
self.y_data = y_data
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
def __len__ (self):
return len(self.X_data)
train_data = trainData(torch.FloatTensor(X_train),
torch.FloatTensor(y_train))
## test data
class testData(Dataset):
def __init__(self, X_data):
self.X_data = X_data
def __getitem__(self, index):
return self.X_data[index]
def __len__ (self):
return len(self.X_data)
test_data = testData(torch.FloatTensor(X_test))
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_data, batch_size=1)
model = Net()
model.to(device)
print(model)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
def binary_acc(y_pred, y_test):
y_pred_tag = torch.round(torch.sigmoid(y_pred))
correct_results_sum = (y_pred_tag == y_test).sum().float()
acc = correct_results_sum/y_test.shape[0]
acc = torch.round(acc * 100)
return acc
model.train()
for e in range(1, EPOCHS+1):
epoch_loss = 0
epoch_acc = 0
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch.unsqueeze(1))
acc = binary_acc(y_pred, y_batch.unsqueeze(1))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')
y_pred_list = []
model.eval()
with torch.no_grad():
for X_batch in test_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_test_pred = torch.sigmoid(y_test_pred)
y_pred_tag = torch.round(y_test_pred)
y_pred_list.append(y_pred_tag.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
cm2=confusion_matrix(y_test, y_pred_list)
from plotcm import plot_confusion_matrix
plot_confusion_matrix(cm2,(0,1))
print(classification_report(y_test, y_pred_list))
###Output
_____no_output_____
###Markdown
Segmentation with CNN
###Code
from loss import enhanced_mixing_loss
###Output
_____no_output_____
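###Markdown
The `loss` module above is a local file that is not included in this excerpt. For reference, here is a minimal sketch of what an "enhanced mixing" loss commonly looks like (a weighted mix of binary cross-entropy and soft Dice); the exact weighting and any focal term in the original module are assumptions.
###Code
import torch
import torch.nn.functional as F

def enhanced_mixing_loss_sketch(y_true, y_pred, alpha=0.5, smooth=1.0):
    """Weighted sum of BCE and soft Dice loss; expects probabilities in [0, 1]."""
    y_pred = y_pred.reshape(y_pred.size(0), -1)
    y_true = y_true.reshape(y_true.size(0), -1).clamp(0, 1)
    bce = F.binary_cross_entropy(y_pred, y_true)
    intersection = (y_pred * y_true).sum(dim=1)
    dice = (2.0 * intersection + smooth) / (y_pred.sum(dim=1) + y_true.sum(dim=1) + smooth)
    return alpha * bce + (1.0 - alpha) * (1.0 - dice.mean())
###Output
_____no_output_____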
###Markdown
*Access data:*
###Code
# All Slices
image_count=imageArray.shape[0]
all_lesion_slices=np.array([])
all_zero_slices=np.array([])
all_lesion_masks=np.array([])
all_zero_masks=np.array([])
zero_sum=0
for i in range(image_count):
# 1. Get 3 tuples with coordinate for each dimension
lesion_index=np.array(maskArray[i,:,:,:].nonzero())
# 2. Get index of slices with lesions along the top view
top_index=np.unique(lesion_index[2])
# 3. Get index of slices without lesions along top view
top_lf_index=np.delete(np.arange(maskArray.shape[3]),top_index,axis=0)
# 4. Create a scan with lesion and lesion free slices with the top view
zero_slices =imageArray[i,:-1,:-1,top_lf_index]
lesion_slices=imageArray[i,:-1,:-1,top_index]
lesion_masks=maskArray[i,:-1,:-1,top_index]
zero_masks=maskArray[i,:-1,:-1,top_lf_index]
# 5. Create a zero mask sum for validation- all zero slice masks = 0
#zero_sum+=np.sum(maskArray[i,:,:,top_lf_index])
# 6. All top view lesion and zero slices combined
all_lesion_slices=np.concatenate((all_lesion_slices, lesion_slices), axis=0)\
if all_lesion_slices.size else lesion_slices
if i%3==0:
all_zero_slices=np.concatenate((all_zero_slices, zero_slices), axis=0)\
if all_zero_slices.size else zero_slices
# 7. All lesion masks top view combined
all_lesion_masks=np.concatenate((all_lesion_masks, lesion_masks), axis=0)\
if all_lesion_masks.size else lesion_masks
print("all_lesion_masks: ",all_lesion_masks.shape)
print("all_lesion_slices: ",all_lesion_slices.shape)
print("all_zero_slices: ",all_zero_slices.shape)
np.max(all_lesion_masks)
###Output
_____no_output_____
###Markdown
Setting up training and test sets
###Code
l_slice_num=all_lesion_slices.shape[0]
zero_slice_num=all_zero_slices.shape[0]
train_x1=all_lesion_slices.reshape(l_slice_num,1,196,232)/254
train_x2=all_zero_slices.reshape(zero_slice_num,1,196,232)/254
train_y1=all_lesion_masks.reshape(l_slice_num,196,232)/254
train_y2=np.zeros(zero_slice_num*196*232).reshape(zero_slice_num,196,232)
# turn to torch and concatenate
train_x1=torch.from_numpy(train_x1).float()
train_x2=torch.from_numpy(train_x2).float()
train_x=torch.cat([train_x1,train_x2],dim=0)
train_y1=torch.from_numpy(train_y1).float()
train_y2=torch.from_numpy(train_y2).float()
train_y=torch.cat([train_y1,train_y2],dim=0)
# means = train_x.mean(dim=1, keepdim=True)
# stds = train_x.std(dim=1, keepdim=True)
# normalized_data = (train_x - means) / stds
# train_x=normalized_data
###Output
_____no_output_____
###Markdown
CNN
###Code
# Module has been put into Net- inheriting characteristics - like cat(animal) would pass animal characteristics to cat class
# What attributes does Module already have ~\anaconda3\lib\site-packages\torch\nn\modules\module.py
class Net(Module):
def __init__(self):
super(Net, self).__init__()
self.cnn_layers=Sequential(
            # First 2D convolution layer: 196x232 in -> 196x232 out (3x3 kernel, padding 1)
Conv2d(1,5,kernel_size=3,stride=1,padding=1),
BatchNorm2d(5),
ReLU(inplace=True),
            # Max pool: 196x232 -> 98x116
MaxPool2d(kernel_size=2, stride=2),
            # Second 2D convolution layer: 98x116 -> 98x116
Conv2d(5,5,kernel_size=3, stride=1, padding=1),
BatchNorm2d(5),
ReLU(inplace=True),
            # Max pool: 98x116 -> 49x58
MaxPool2d(kernel_size=2, stride=2),
nn.Upsample(scale_factor=2, mode='nearest'),
Conv2d(5, 5, kernel_size=3, padding=1),
ReLU(),
nn.Upsample(scale_factor=2, mode='nearest'),
Conv2d(5, 5, kernel_size=3, padding=0),
ReLU(),
nn.Conv2d(5, 1, kernel_size=1, padding=1),
nn.Sigmoid()
)
# Defining the forward pass
def forward(self, x):
x = self.cnn_layers(x)
#print(x.size())
return x
X_train, X_test, y_train, y_test = train_test_split(train_x, train_y, test_size=0.33, random_state=69)
# Sanity-check the forward pass with the cropped slice size used for the segmentation data
x=torch.randn(1,1,196,232)
n=Net()
n(x)
class trainData(Dataset):
def __init__(self, X_data, y_data):
self.X_data = X_data
self.y_data = y_data
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
def __len__ (self):
return len(self.X_data)
train_data = trainData(torch.FloatTensor(X_train),
torch.FloatTensor(y_train))
## test data
class testData(Dataset):
def __init__(self, X_data):
self.X_data = X_data
def __getitem__(self, index):
return self.X_data[index]
def __len__ (self):
return len(self.X_data)
test_data = testData(torch.FloatTensor(X_test))
EPOCHS = 30
BATCH_SIZE = 1
LEARNING_RATE = 0.001
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_data, batch_size=1)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#device=torch.device("cpu")
print(device)
model = Net()
model.to(device)
print(model)
#criterion = enhanced_mixing_loss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
def binary_acc(y_pred, y_test):
y_pred_tag = torch.round(torch.sigmoid(y_pred))
correct_results_sum = (y_pred_tag == y_test).sum().float()
acc = correct_results_sum/y_test.shape[0]
acc = torch.round(acc * 100)
return acc
model.train()
for e in range(1, EPOCHS+1):
epoch_loss = 0
epoch_acc = 0
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = enhanced_mixing_loss(y_batch,y_pred)
#acc = binary_acc(y_pred, y_batch.unsqueeze(1))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
#epoch_acc += acc.item()
# | Acc: {epoch_acc/len(train_loader):.3f}
print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} ')
y_pred_list = []
model.eval()
with torch.no_grad():
for X_batch in test_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_test_pred = torch.sigmoid(y_test_pred)
y_pred_tag = torch.round(y_test_pred)
y_pred_list.append(y_pred_tag.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
cm2=confusion_matrix(y_test, y_pred_list)
from plotcm import plot_confusion_matrix
plot_confusion_matrix(cm2,(0,1))
print(classification_report(y_test, y_pred_list))
###Output
_____no_output_____
|
Data_experimentation.ipynb
|
###Markdown
Oil & Gas Data Install Dependencies
###Code
!pip install pandas
!pip install xlrd
import pandas as pd
import os
os.listdir('./data/')
###Output
_____no_output_____
###Markdown
Load Data to a pandas DataFrame
###Code
df = pd.read_excel('./data/20210309_2020_1 - 4.xls', sheet_name="Sheet1")
df.head()
df[df['API WELL NUMBER']== 34013209230000]
###Output
_____no_output_____
###Markdown
Group By and Aggregate the Data
###Code
grouped_df = df.groupby(['API WELL NUMBER', 'Production Year'])
grouped_df.head()
df.columns
agg_df = grouped_df.agg({
'OWNER NAME': 'first',
'COUNTY': 'first',
'TOWNSHIP': 'first',
'WELL NAME': 'first',
'WELL NUMBER': 'first',
'OIL': 'sum',
'GAS': 'sum',
'BRINE': 'sum',
}).reset_index()
agg_df
###Output
_____no_output_____
###Markdown
Validate the results
###Code
df[df['API WELL NUMBER']== 34059242540000]
agg_df.loc[agg_df['API WELL NUMBER']== 34059242540000]
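# Illustrative programmatic check (assumption: the yearly sums should match for any single well)
api = 34059242540000
raw_oil = df.loc[df['API WELL NUMBER'] == api, 'OIL'].sum()
agg_oil = agg_df.loc[agg_df['API WELL NUMBER'] == api, 'OIL'].sum()
print(raw_oil, agg_oil, raw_oil == agg_oil)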
###Output
_____no_output_____
###Markdown
Save Aggregated Data to csv for Future Reference
###Code
agg_df.to_csv('./data/aggregated_annual_data_2020.csv', encoding='utf-8', index=False)
###Output
_____no_output_____
###Markdown
--- Load Data to SQLite Database
###Code
import sqlite3 as sql
conn = sql.connect('./data/oil_and_gas.db')
agg_df.to_sql('annual_report_2020', conn, index=False)
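# Note (assumption about re-runs): to_sql raises a ValueError if the table already exists;
# repeated executions of this cell could pass if_exists='replace' or 'append', e.g.
# agg_df.to_sql('annual_report_2020', conn, index=False, if_exists='replace')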
###Output
_____no_output_____
###Markdown
Validate data from SQLite
###Code
annual_report = pd.read_sql('SELECT * FROM annual_report_2020', conn)
annual_report.head()
annual_report.loc[annual_report['API WELL NUMBER']== 34059242540000]
###Output
_____no_output_____
|
autoscan/notebooks/keras-cnn-pipeline.ipynb
|
###Markdown
Data Preprocessing
###Code
!ls -lh ../data/raw/
with open("../data/raw/all_object_data_in_dictionary_format.pkl", "rb") as pickled_data:
all_data = pickle.load(pickled_data)
X, y = all_data["images"], all_data["targets"]
scaler = preprocessing.MinMaxScaler()
Z = scaler.fit_transform(X.reshape(-1, 3 * 51**2))
training_features, testing_features, training_target, testing_target = model_selection.train_test_split(Z, y, test_size=0.2)
training_features.shape
testing_features.shape
###Output
_____no_output_____
###Markdown
Start with a simple DNN Start with a simple Deep Neural Network (DNN) with a single hidden layer as a benchmark. A simple DNN is able to achieve over 90% accuracy and recall on the test set! Unlike classical ML approaches, which require expensive-to-obtain hand-engineered features, this simple DNN works with the raw image data.
###Code
model_fn = keras.models.Sequential([
keras.layers.Flatten(data_format="channels_first", input_shape=(3, 51, 51)),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(1, activation="sigmoid")
])
_metrics = [
keras.metrics.BinaryAccuracy(),
keras.metrics.Recall()
]
model_fn.compile(optimizer="adam", loss="binary_crossentropy", metrics=_metrics)
model_fn.summary()
model_fn.fit(training_features.reshape((-1, 3, 51, 51)), training_target, epochs=2)
model_fn.evaluate(testing_features.reshape((-1, 3, 51, 51)), testing_target)
###Output
178882/178882 [==============================] - 36s 199us/sample - loss: 0.1834 - binary_accuracy: 0.9294 - recall_2: 0.8952
###Markdown
Improve upon DNN by adding convolutions Show how we can improve performance by adding convolutional layers to our model.
###Code
model_fn = keras.models.Sequential([
keras.layers.Conv2D(filters=16, kernel_size=(3,3), data_format="channels_first", input_shape=(3, 51, 51)),
keras.layers.ReLU(),
keras.layers.MaxPool2D(pool_size=(2,2), data_format="channels_first"),
keras.layers.Conv2D(filters=32, kernel_size=(3,3), data_format="channels_first"),
keras.layers.ReLU(),
keras.layers.MaxPool2D(pool_size=(2,2), data_format="channels_first"),
keras.layers.Conv2D(filters=64, kernel_size=(3,3), data_format="channels_first"),
keras.layers.ReLU(),
keras.layers.MaxPool2D(pool_size=(2,2), data_format="channels_first"),
keras.layers.Flatten(data_format="channels_first"),
keras.layers.Dense(128),
keras.layers.ReLU(),
keras.layers.Dense(1, activation="sigmoid")
])
_metrics = [
keras.metrics.BinaryAccuracy(),
keras.metrics.Recall(),
]
model_fn.compile(optimizer="adam", loss="binary_crossentropy", metrics=_metrics)
model_fn.summary()
model_fn.fit(training_features.reshape((-1, 3, 51, 51)), training_target, epochs=10)
model_fn.evaluate(testing_features.reshape((-1, 3, 51, 51)), testing_target)
###Output
178882/178882 [==============================] - 27s 153us/sample - loss: 0.0966 - binary_accuracy: 0.9677 - recall_11: 0.9657
###Markdown
Improve speed of convergence by adding batch normalization?
###Code
model_fn = keras.models.Sequential([
keras.layers.Conv2D(filters=16, kernel_size=(3,3), data_format="channels_first", input_shape=(3, 51, 51)),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.MaxPool2D(pool_size=(2,2), data_format="channels_first"),
keras.layers.Conv2D(filters=32, kernel_size=(3,3), data_format="channels_first"),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.MaxPool2D(pool_size=(2,2), data_format="channels_first"),
keras.layers.Conv2D(filters=64, kernel_size=(3,3), data_format="channels_first"),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.MaxPool2D(pool_size=(2,2), data_format="channels_first"),
keras.layers.Flatten(data_format="channels_first"),
keras.layers.Dense(128),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.Dense(1, activation="sigmoid")
])
_metrics = [
keras.metrics.BinaryAccuracy(),
keras.metrics.Recall(),
]
model_fn.compile(optimizer="adam", loss="binary_crossentropy", metrics=_metrics)
model_fn.summary()
model_fn.fit(training_features.reshape((-1, 3, 51, 51)), training_target, epochs=10)
model_fn.evaluate(testing_features.reshape((-1, 3, 51, 51)), testing_target)
###Output
178882/178882 [==============================] - 29s 164us/sample - loss: 0.0890 - binary_accuracy: 0.9697 - recall_10: 0.9764
|
work/04_preprocess.ipynb
|
###Markdown
**A note on this notebook as hands-on material** Normally you iterate in the same cell over and over, so only the cells that eventually worked survive; the failed attempts are not kept, and nobody bothers to keep them. For this hands-on I wanted to preserve the trial-and-error thought process, so even when an error or mistake occurs I move on to the cell below and keep executing. This style is only possible because notebooks can be run cell by cell, which is a good thing. (From the cells below, the text is written in a more informal style.) kunai (@jdgthjdg) --- This part is titled preprocess; everything so far was pre-preprocess
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import japanize_matplotlib
import pandas as pd
import numpy as np
import qgrid
# Configure pandas so DataFrames are not displayed at full length (saves screen space and makes the hands-on easier to follow)
# Threshold above which long output is elided in the middle with "..." (unrelated to line wrapping)
pd.set_option('max_rows',10)
pd.set_option('max_columns',20) # beyond this, not every column is shown; the middle is elided like A B C ... X Y Z
###Output
_____no_output_____
###Markdown
Read back the DataFrame generated by the processing so far, which was saved with pickle. Keeping it as a pickle makes loading almost instantaneous. You do not have to rewrite the code that reads the CSV and reshapes it from scratch, so it is very handy as temporary save data! (Fast, because date parsing and the like does not have to be redone.)
###Code
kafun = pd.read_pickle("kafun03.pkl")
kafun.head()
kafun.tail()
###Output
_____no_output_____
###Markdown
**Could it be that many rows contain nothing but NaN?** Drop the rows that are entirely NaN, using [dropna](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html). Note that with the default how="any", a row is dropped if it contains even a single NaN along the chosen axis, so check the documentation.
###Code
kafun = kafun.dropna(axis=0, how="all")
kafun.tail()
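# Toy illustration (not part of the pollen data): the difference between how="all" and the default how="any"
demo = pd.DataFrame({"a": [1, np.nan, np.nan], "b": [2, 3, np.nan]})
print(demo.dropna(how="all"))   # drops only the last row, which is entirely NaN
print(demo.dropna(how="any"))   # drops every row that contains at least one NaN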
###Output
_____no_output_____
###Markdown
--- Next we want to drop the columns that are mostly NaN. Count the non-NaN values in each column and plot the counts.
###Code
kafun.shape
kafun.count().plot(figsize=(20,3));
###Output
_____no_output_____
###Markdown
Say goodbye to the columns whose non-NaN count is 25,000 or less. How do we select them?
###Code
kafun.count() > 25000
###Output
_____no_output_____
###Markdown
What this expression gives us is a **boolean array over the columns**. For selecting columns/rows by boolean masks, loc can be used!
###Code
col_above_25k = kafun.count()>25000
col_above_25k
kafun.loc[: , col_above_25k]
###Output
_____no_output_____
###Markdown
--- Call copy() to create a new DataFrame. Without copy(), a reference to the source data remains... If you run into a SettingWithCopyWarning, trying .copy() as a symptomatic fix usually works (crude, but effective). Why copy? Because otherwise pandas complains with SettingWithCopyWarning and the like.
###Code
kafun = kafun.loc[:, col_above_25k].copy()
kafun.head() # much cleaner
###Output
_____no_output_____
###Markdown
--- Replace NaN with 0 using fillna(). There are various ways to handle abnormal values, but for now both NaN and the abnormal (negative) values are simply set to 0. [fillna()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html)
###Code
kafun = kafun.fillna(0)
kafun.head()
###Output
_____no_output_____
###Markdown
--- Replace negative values with 0. **These sentinel codes reappear just when you had forgotten about them:** -9998: missing due to snowfall; -9997: missing due to yellow sand (Asian dust); -9996: other missing data (values that look unnatural compared with neighbouring times and stations); blank: not observed, or missing due to a communication failure. There are several ways to turn negative values into 0; the ones that come to mind first are min/max-style operations, but is there an easier way? Searching the documentation turns up [clip()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.clip.html)
###Code
kafun = kafun.clip(lower=0.0) # very easy
kafun.head()
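# The min/max-style alternatives mentioned above (illustrative, equivalent results):
# kafun.where(kafun >= 0, 0)   # keep values >= 0, replace the rest with 0
# kafun[kafun < 0] = 0         # boolean-mask assignment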
###Output
_____no_output_____
###Markdown
Save with pickle
###Code
kafun.to_pickle("kafun04.pkl")
###Output
_____no_output_____
###Markdown
Look at the graph
###Code
kafun.plot(figsize=(20,9), legend=True);
###Output
_____no_output_____
###Markdown
Isn't Nara in 2013 suspiciously dominant? Also look into the options of [plot()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html)
###Code
kafun.plot(figsize=(16,15), subplots=True);
###Output
_____no_output_____
|
scheduler/simulator_files/analysis/analysis.ipynb
|
###Markdown
Single simulation analysis
###Code
out_trace_file = "../example-out-trace.csv"
cycle_time_ms = 30000
df_a = prepare_df(pandas.read_csv(out_trace_file))
user_running_a = df_a.groupby("user").apply(running_concurrently).reset_index().sort_values("time_ms")
user_waiting_a = df_a.groupby("user").apply(waiting_over_time).reset_index().sort_values("time_ms")
usage_df_a = prepare_usage_df(user_running_a, user_waiting_a, cycle_time_ms)
df_a.head()
score_card(df_a, user_running_a, user_waiting_a, cycle_time_ms).transpose()
###Output
_____no_output_____
###Markdown
Point in time analysis
###Code
out_trace_file = "../example-out-trace.csv"
df_a = prepare_df(pandas.read_csv(out_trace_file))
[per_host, per_user, waiting, running_at, df_a] = point_in_time_analysis(df_a, df_a.start_time_ms.median())
per_host.mem.describe()
per_user.sort_values("mem")
waiting
running_at
###Output
_____no_output_____
###Markdown
Comparing simulation runs
###Code
cycle_time_ms = 30000
df_a = prepare_df(pandas.read_csv("../example-out-trace.csv"))
user_running_a = df_a.groupby("user").apply(running_concurrently).reset_index().sort_values("time_ms")
user_waiting_a = df_a.groupby("user").apply(waiting_over_time).reset_index().sort_values("time_ms")
usage_df_a = prepare_usage_df(user_running_a, user_waiting_a, cycle_time_ms)
df_b = prepare_df(pandas.read_csv("../example-out-trace.csv"))
user_running_b = df_b.groupby("user").apply(running_concurrently).reset_index().sort_values("time_ms")
user_waiting_b = df_b.groupby("user").apply(waiting_over_time).reset_index().sort_values("time_ms")
usage_df_b = prepare_usage_df(user_running_b, user_waiting_b, cycle_time_ms)
scores = pandas.concat([score_card(df_a, user_running_a, user_waiting_a, cycle_time_ms),
score_card(df_b, user_running_b, user_waiting_b, cycle_time_ms)]).transpose()
scores.columns = ["a", "b"]
scores['improvement_a_to_b'] = (scores.b - scores.a)/scores.a
scores
bins = np.linspace(0,1,20)
ax = usage_df_a[usage_df_a.fair_ratio > 0].fair_ratio.hist(bins=bins, label="a", alpha=0.8)
usage_df_b[usage_df_b.fair_ratio > 0].fair_ratio.hist(bins=bins, ax=ax, label="b", alpha=0.8)
plt.xlim([0.,0.99])
plt.legend()
plt.xlabel("memory running over fair allocation")
plt.ylabel("frequency")
plt.title("distribution of memory running over fair allocation")
ax = usage_df_a[usage_df_a.fair_ratio > 0].groupby("time_ms").fair_ratio.median().plot(label="a", alpha=0.8)
usage_df_b[usage_df_b.fair_ratio > 0].groupby("time_ms").fair_ratio.median().plot(ax=ax, label="b", alpha=0.8)
plt.legend()
plt.xlabel("time from beginning of sim (milliseconds)")
plt.ylabel("median memory running over fair allocation")
plt.title("memory running over fair allocation over time")
bins = 100
ax = usage_df_a[usage_df_a.starved_mem_gb > 0].starved_mem_gb.hist(bins=bins, label="a", alpha=0.8)
usage_df_b[usage_df_b.starved_mem_gb > 0].starved_mem_gb.hist(bins=bins, ax=ax, label="b", alpha=0.8)
plt.legend()
plt.xlabel("Starved memory (gb)")
plt.ylabel("frequency")
plt.title("distribution of starvation")
ax = usage_df_a[usage_df_a.starved_mem_gb > 0].groupby('time_ms').starved_mem_log10.median().plot(label="a", alpha=0.8)
usage_df_b[usage_df_b.starved_mem_gb > 0].groupby('time_ms').starved_mem_log10.median().plot(label="b", alpha=0.8)
plt.legend()
plt.xlabel("time from beginning of sim (milliseconds)")
plt.ylabel("median log starved memory (gb)")
plt.title("log starvation over time")
bins = range(20)
plt.hist(df_a.overhead/cycle_time_ms, label="a", alpha=0.8, bins = bins)
plt.hist(df_b.overhead/cycle_time_ms, label="b", alpha=0.8, bins = bins)
plt.legend()
plt.xlabel("Cycles until scheduled")
plt.ylabel("frequency")
plt.title("Distribution of cycles until scheduled")
###Output
_____no_output_____
|
Unit_2_Build_Week_Project.ipynb
|
###Markdown
###Code
import pandas as pd
df = pd.read_csv('all-states-history.csv')
df.head()
import numpy as np
import datetime
from sklearn.preprocessing import OrdinalEncoder
def wrangle(df):
temp = df.copy()
temp['state'] = OrdinalEncoder().fit_transform(np.array(temp['state']).reshape(-1,1))
quality_dict = {'F': 0, 'D': 1, 'C': 2, 'B': 3, 'A': 4, 'A+': 5}
temp['dataQualityGrade'] = list(map(quality_dict.get, temp['dataQualityGrade']))
temp2 = []
for _, row in temp[['death', 'deathConfirmed', 'deathProbable']].iterrows():
nan_list = list(map(np.isnan, row))
if sum(nan_list) == 1:
if nan_list[0]:
row['death'] = row['deathConfirmed'] + row['deathProbable']
elif nan_list[1]:
row['deathConfirmed'] = row['death'] - row['deathProbable']
else:
row['deathProbable'] = row['death'] - row['deathConfirmed']
temp2.append(row)
temp[['death', 'deathConfirmed', 'deathProbable']] = temp2
temp.interpolate(method = 'linear', limit_direction = 'backward', limit = 30, inplace = True)
temp.fillna(0, inplace = True)
temp['date'] = pd.to_datetime(temp['date'], format = '%Y%m%d')
temp['date'] = temp['date'].map(datetime.datetime.toordinal)
num_days = 10
target = []
for index, row in temp.iterrows():
date = row['date']
state = row['state']
min_date = min(temp[temp['state'] == state]['date'])
new_row = []
for i in range(min(num_days, int(date - 4 - min_date))):
previous_day_deaths = temp[(temp['date'] == date - i - 5) & (temp['state'] == state)]['positive'].values[0]
new_row.append(previous_day_deaths)
if len(new_row) < num_days:
for i in range(num_days - len(new_row)):
new_row.append(0)
target.append(new_row)
column_names = [str(i + 5) + ' Days Ago' for i in range(num_days)]
target = pd.DataFrame(target, columns = column_names)
temp.drop('date', axis = 1, inplace = True)
return temp, target
X, y = wrangle(df)
X.head()
y.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size = 0.25)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators = 300,
max_features = 'sqrt',
max_depth = 14)
model.fit(X_train, y_train)
from sklearn.metrics import explained_variance_score as evs
y_pred = model.predict(X_train)
print("% of training variance explained: " + str(evs(y_train, y_pred)))
y_pred = model.predict(X_val)
print("% of validation variance explained: " + str(evs(y_val, y_pred)))
y_pred = model.predict(X_test)
print("% of test variance explained: " + str(evs(y_test, y_pred)))
y_pred = []
for _, row in y_test.iterrows():
average = row.mean()
new_row = [average for _ in range(len(row))]
y_pred.append(new_row)
y_pred = pd.DataFrame(y_pred)
print("Baseline explained variance: " + str(evs(y_test, y_pred)))
print(model.predict(X_test))
print(pd.DataFrame(model.predict(X_test)).columns)
import matplotlib.pyplot as plt
def custom_pdp(df, model, target_var, gradations, title):
if target_var not in df.columns:
return None
minimum = min(df[target_var])
maximum = max(df[target_var])
values = [minimum + i * (maximum - minimum) / (gradations - 1) for i in range(gradations)]
predictions = []
for value in values:
df2 = df.copy()
df2[target_var] = value
y_pred = pd.DataFrame(model.predict(df2))
averages = [y_pred[column].mean() for column in y_pred.columns]
predictions.append(averages)
plt.figure(figsize=(10, 6))
plt.title(title)
plt.plot(values, predictions)
custom_pdp(X_test, model, 'death', 10, 'Partial Dependence Plot for Death Count')
custom_pdp(X_test, model, 'hospitalized', 10, 'Partial Dependence Plot for Hospitalization Rate')
custom_pdp(X_test, model, 'positive', 10, 'Partial Dependence Plot for Patients Testing Positive')
custom_pdp(X_test, model, 'negative', 10, 'Partial Dependence Plot for Patients Testing Negative')
custom_pdp(X_test, model, 'totalTestResults', 10, 'Partial Dependence Plot for Patients Tested')
###Output
_____no_output_____
|
jupyter_notebooks/Real_Models_ipnyb.ipynb
|
###Markdown
**If you want to know roughly how large the loss of your classification model should be at the start, go to https://www.wolframalpha.com/ and compute ln(classes) (e.g. ln(10) ≈ 2.302585 for 10 classes). The predicted loss will lie in this neighbourhood, i.e. between 0 and the value you get: 0 to 2.302585 for ln(10).** x -> y **Packages**
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, mean_squared_error, log_loss
from tqdm import tqdm_notebook
from sklearn.preprocessing import OneHotEncoder
from sklearn.datasets import make_blobs
def sigmoid(x, w, b):
return 1/(1+np.exp(-(w*x + b )))
###Output
_____no_output_____
###Markdown
One-hot encoding for the classes [sky, earth, sand, water]: [1, 0, 0, 0] -> sky, [0, 1, 0, 0] -> earth, [0, 0, 1, 0] -> sand, [0, 0, 0, 1] -> water. Stacked as a matrix: [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
###Code
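# Illustration of the one-hot encoding described above (an added sketch; note that sklearn's
# OneHotEncoder orders the categories alphabetically: earth, sand, sky, water).
labels = np.array([["sky"], ["earth"], ["sand"], ["water"]])
print(OneHotEncoder().fit_transform(labels).toarray())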
x = 2
w = 0.5
b = 1
y = sigmoid(x, w, b)
print(y)
x = np.linspace(-10, 10, 100)
y = sigmoid(x, w, b)
plt.plot(x,y)
plt.show()
def sigmoid_multi(x1, x2, w1, w2, b):
return 1/(1 + np.exp(-(w1*x1 + w2*x2 + b)))
sigmoid_multi(1, 0, 0.5, 0, 0)
x1 = np.linspace(-10, 10, 100)
x2 = np.linspace(-10, 10, 100)
xx1, xx2 = np.meshgrid(x1, x2)
print(x1.shape, x2.shape, xx1.shape, xx2.shape)
###Output
(100,) (100,) (100, 100) (100, 100)
###Markdown
**Loss Calculation**
###Code
def loss_fn1(X , Y, W, B):
loss = 0
for x,y in zip(X,Y):
loss += (y - sigmoid(x, W, B))**2
return loss
###Output
_____no_output_____
###Markdown
**Sigmoid Neuron**
###Code
class SigmoidNeuron:
def __init__(self):
self.w = None
self.b = None
def perceptron(self, x):
return np.dot(x, self.w.T) + self.b
def sigmoid(self, x):
return 1.0/(1.0 + np.exp(-x))
def grad_w_mse(self, x, y):
y_pred = self.sigmoid(self.perceptron(x))
return (y_pred - y) * y_pred * (1 - y_pred) * x
def grad_b_mse(self, x, y):
y_pred = self.sigmoid(self.perceptron(x))
return (y_pred - y) * y_pred * (1 - y_pred)
def grad_w_ce(self, x, y):
y_pred = self.sigmoid(self.perceptron(x))
if y == 0:
return y_pred * x
elif y == 1:
return -1 * (1 - y_pred) * x
else:
raise ValueError("y should be 0 or 1")
def grad_b_ce(self, x, y):
y_pred = self.sigmoid(self.perceptron(x))
if y == 0:
return y_pred
elif y == 1:
return -1 * (1 - y_pred)
else:
raise ValueError("y should be 0 or 1")
def fit(self, X, Y, epochs=1, learning_rate=.05, initialise=True, loss_fn="mse", display_loss=False):
# initialise w, b
if initialise:
self.w = np.random.randn(1, X.shape[1])
self.b = 0
if display_loss:
loss = {}
for i in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
dw = 0
db = 0
for x, y in zip(X, Y):# 1--1
if loss_fn == "mse":
dw += self.grad_w_mse(x, y)
db += self.grad_b_mse(x, y)
elif loss_fn == "ce":
dw += self.grad_w_ce(x, y)
db += self.grad_b_ce(x, y)
m = X.shape[1]
self.w -= learning_rate * (dw/m) # gradient descent update of the weights
self.b -= learning_rate * (db/m) # i.e. self.b = self.b - learning_rate * db/m
if display_loss:
Y_pred = self.sigmoid(self.perceptron(X))
if loss_fn == "mse":
loss[i] = mean_squared_error(Y, Y_pred)
elif loss_fn == "ce":
loss[i] = log_loss(Y, Y_pred)
if display_loss:
plt.plot(np.array(list(loss.values())).astype(float))
plt.xlabel('Epochs')
if loss_fn == "mse":
plt.ylabel('Mean Squared Error')
elif loss_fn == "ce":
plt.ylabel('Log Loss')
plt.show()
def predict(self, X):
Y_pred = []
for x in X:
y_pred = self.sigmoid(self.perceptron(x))
Y_pred.append(y_pred)
return np.array(Y_pred)
###Output
_____no_output_____
###Markdown
**Model Training and testing with Data**
###Code
X = np.asarray([[2.5, 2.5], [4, -1], [1, -4], [-3, 1.25], [-2, -4], [1, 5]])
Y = [1, 1, 1, 0, 0, 0]  # one label per row of X
sn = SigmoidNeuron()
sn.fit(X, Y, 5, 0.25, True )
def plot_sn(X, Y, sn, ax):
X1 = np.linspace(-10, 10, 100)
X2 = np.linspace(-10, 10, 100)
XX1, XX2 = np.meshgrid(X1, X2)
YY = np.zeros(XX1.shape)
for i in range(X2.size):
for j in range(X1.size):
val = np.asarray([X1[j], X2[i]])
YY[i, j] = sn.sigmoid(sn.perceptron(val))
ax.contourf(XX1, XX2, YY, cmap=my_cmap, alpha=0.6)
ax.scatter(X[:,0], X[:,1],c=Y, cmap=my_cmap)
ax.plot()
###Output
_____no_output_____
###Markdown
**Real data**
###Code
data = pd.read_csv('/content/drive/My Drive/DL_Course/pima-indians-diabetes.csv')
data.head()
data.shape
X = data.drop('1', axis = 1)
Y = data['1'].values
print(Y)
print(X)
###Output
6 148 72 35 0 33.6 0.627 50
0 1 85 66 29 0 26.6 0.351 31
1 8 183 64 0 0 23.3 0.672 32
2 1 89 66 23 94 28.1 0.167 21
3 0 137 40 35 168 43.1 2.288 33
4 5 116 74 0 0 25.6 0.201 30
.. .. ... .. .. ... ... ... ..
762 10 101 76 48 180 32.9 0.171 63
763 2 122 70 27 0 36.8 0.340 27
764 5 121 72 23 112 26.2 0.245 30
765 1 126 60 0 0 30.1 0.349 47
766 1 93 70 31 0 30.4 0.315 23
[767 rows x 8 columns]
###Markdown
**Standardisation**
###Code
R = np.random.random([100, 1])
plt.plot(R)
plt.show()
print( 'mean=', np.mean(R), ' standard deviation =', np.std(R) )
scaler = StandardScaler()
scaler.fit(R)
scaler.mean_
RT = scaler.transform(R)
print( 'mean=', np.mean(RT), ' standard deviation =', np.std(RT) )
plt.plot(RT)
plt.show()
###Output
_____no_output_____
###Markdown
**Test Train Split**
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0, stratify=Y)
print(X_test.shape, Y_test.shape)
scaler = StandardScaler()
X_scaled_train = scaler.fit_transform(X_train) #fit and transform can be done togther
X_scaled_test = scaler.transform(X_test)
#Used when we have real valued output
#Y_scaled_train = minmax_scaler.fit_transform(Y_train.reshape(-1, 1))
#Y_scaled_test = minmax_scaler.transform(Y_test.reshape(-1, 1))
#scaled_threshold = list(minmax_scaler.transform(np.array([threshold]).reshape(1, -1)))[0][0]
#scaled_threshold
#Y_binarised_train = (Y_scaled_train > scaled_threshold).astype("int").ravel()
#Y_binarised_test = (Y_scaled_test > scaled_threshold).astype("int").ravel()
sn = SigmoidNeuron()
sn.fit(X_scaled_train, Y_train, epochs=2000, learning_rate=0.015, display_loss=True)
Y_pred_train = sn.predict(X_scaled_train)
Y_pred_test = sn.predict(X_scaled_test)
print(Y_pred_train.ravel(), Y_pred_test.ravel())
accuracy_train = accuracy_score(Y_pred_train.round(), Y_train, )
accuracy_test = accuracy_score(Y_pred_test.round(), Y_test, )
print(accuracy_train, accuracy_test)
###Output
_____no_output_____
###Markdown
**Pytorch basics**
###Code
import torch
import numpy as np
import matplotlib.pyplot as plt
x = torch.ones(3, 3) #Creating Tensors of ones
print(x)
x = torch.zeros(3, 2)
print(x)
x = torch.rand(4, 4)
print(x)
x = torch.empty(3, 2) #Create space for tensors without initilizing values
print(x)
y = torch.zeros_like(x) #Tensor with shape as of 'X' but with '0' as values
print(y)
x = torch.linspace(0, 1, steps=5)
print(x)
x = torch.tensor([[1, 2], #Explicit tensor definition
[3, 4],
[5, 6]])
print(x)
print(x.size())
print(x[:, 1])
print(x[0, :])
y = x[1, 1]
print(y) # Value of tensor type
print(y.item()) #Used to print numerical value
print(x)
y = x.view(2, 3)
print(y)
y = x.view(6,-1) # One dimension specified; the other can be set automatically with -1
print(y)
x = torch.ones([3, 2])
y = torch.ones([3, 2])
z = y.add(x)
print(z)
print(y)
x_np = x.numpy()
print(type(x), type(x_np))
print(x_np)
a = np.random.randn(5)
print(a)
a_pt = torch.from_numpy(a)
print(type(a), type(a_pt))
print(a_pt)
%%time
for i in range(100):
a = np.random.randn(100,100)
b = np.random.randn(100,100)
c = np.matmul(a, b)
%%time
for i in range(100):
a = torch.randn([100, 100])
b = torch.randn([100, 100])
c = torch.matmul(a, b)
x_np = x.numpy()
print(type(x), type(x_np))
print(x_np)
a = np.random.randn(5)
print(a)
a_pt = torch.from_numpy(a)
print(type(a), type(a_pt))
print(a_pt)
###Output
[-0.07723755 1.06128717 0.82278356 0.46867666 1.56330511]
<class 'numpy.ndarray'> <class 'torch.Tensor'>
tensor([-0.0772, 1.0613, 0.8228, 0.4687, 1.5633], dtype=torch.float64)
###Markdown
**Automatic Differentiation (Autodiff)**
###Code
x = torch.randn([20, 1], requires_grad=True) #creates a connecton for flow of gradients
y = 3*x - 2
w = torch.tensor([1.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
y_hat = w*x + b #Forward pass
loss = torch.sum((y_hat - y)**2)
print(loss)
loss.backward() #Backward differenciation or Backward pass
print(w.grad, b.grad) #Printing new gradients
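# Analytic check (illustrative): for loss = sum((y_hat - y)**2),
# d(loss)/dw = sum(2*(y_hat - y)*x) and d(loss)/db = sum(2*(y_hat - y)),
# which should match the autograd values printed above.
print(torch.sum(2 * (y_hat - y) * x), torch.sum(2 * (y_hat - y)))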
#With CPU
%%time
learning_rate = 0.001
N = 10000000
epochs = 200
w = torch.rand([N], requires_grad=True)
b = torch.ones([1], requires_grad=True)
print(torch.mean(w).item(), b.item())
for i in range(epochs):
x = torch.randn([N])
y = torch.dot(3*torch.ones([N]), x) - 2
y_hat = torch.dot(w, x) + b #Forward Pass
loss = torch.sum((y_hat - y)**2)
loss.backward() #Backward pass
with torch.no_grad():
w -= learning_rate * w.grad #Updation of weights
b -= learning_rate * b.grad #Updation of bias
w.grad.zero_()
b.grad.zero_()
print(torch.mean(w).item(), b.item())
###Output
0.5001104474067688 1.0
0.5044001936912537 -13.2140474319458
0.5262081027030945 -91.69007110595703
0.5013394951820374 -6.995948791503906
0.4970962107181549 -32.11152648925781
0.5618374943733215 846.9442138671875
0.749310314655304 249.77691650390625
2.1567752361297607 7293.17529296875
0.0978522002696991 23639.69140625
24.892539978027344 87117.4140625
-30.232280731201172 -269897.125
-28.044902801513672 87895.15625
-1251.4873046875 -3968511.25
-1200.3961181640625 -10445907.0
-56562.6796875 137404336.0
-104698.375 22410056.0
37347.43359375 294791104.0
-115300.7578125 2315970048.0
774964.0625 -2267431424.0
4033698.25 50899255296.0
-9622653.0 477823434752.0
-548925888.0 3835293073408.0
-4788516352.0 -9446182879232.0
-28758833152.0 117389794476032.0
-42052771840.0 88861145300992.0
8856744960.0 1385925951094784.0
2201560285184.0 7834507289821184.0
1204208009216.0 -6.428190364087091e+16
14306669232128.0 -3.773982265437061e+17
10850079342592.0 -3.3599041248323174e+17
-118085906333696.0 3.5319625762943795e+17
1851844406018048.0 -3.6346302991716844e+18
5612809828171776.0 3.349924297591646e+19
1.3343879273119744e+16 -3.871882258506213e+19
1.3072751731618611e+17 -4.583472374136795e+20
1.4886610909999923e+17 -5.1022866993661556e+20
8.152506775791206e+16 2.2334790296636165e+21
-1.2833236936122434e+19 2.2548631621517854e+22
-1.287336009454091e+20 -1.9189549781849532e+23
-7.909593740322904e+19 -2.682043097606759e+23
-6.059185599715423e+19 -1.3607280397648572e+23
-4.490726017466913e+20 -1.2807169446698049e+24
-5.813515207612595e+20 -1.6634458113706764e+24
1.5237347037850003e+20 -5.698753223151844e+24
1.1773959790118199e+22 4.655177445761371e+25
1.7192492703386775e+22 2.849256419509067e+24
3.982360662618001e+22 -3.8783821935721304e+26
5.126149767900466e+22 -4.3261791195707186e+26
-3.0241340032860306e+23 4.041366556160984e+26
2.8373000110167445e+23 -4.79828737851893e+27
-2.5477564933000825e+24 -2.616451942257962e+28
-7.432201257748111e+24 -1.4898272895884609e+29
-3.536293412150731e+25 -2.5344124888633146e+29
-2.7206263507458864e+26 1.2933870442126866e+29
-1.600313709040458e+26 1.0316758360258264e+30
-2.4063340825254445e+27 -3.934579946155784e+30
-2.232344569510959e+28 -7.380497931591245e+31
-2.6027648713623368e+28 -1.4346985192715953e+31
-2.9472619785243257e+28 -2.8666476510871903e+32
-8.292566213355568e+28 -1.5691168534451034e+33
-6.2661466206302325e+29 -1.6568490450592387e+34
nan -1.4440141735834352e+35
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
CPU times: user 37.1 s, sys: 0 ns, total: 37.1 s
Wall time: 37 s
###Markdown
**CUDA Acceleration**
###Code
print(torch.cuda.device_count()) #No of GPUs
print(torch.cuda.device(0)) #Printing device
print(torch.cuda.get_device_name(0)) #Printing Device name
cuda0 = torch.device('cuda:0') #Assign GPU for task
a = torch.ones(3, 2, device=cuda0)#Creating Tensor on GPU
b = torch.ones(3, 2, device=cuda0)
c = a + b
print(c)
%%time
for i in range(10):
a = torch.randn([10000, 10000], device=cuda0)
b = torch.randn([10000, 10000], device=cuda0)
torch.matmul(a, b)
#With GPU
%%time
learning_rate = 0.001
N = 10000000
epochs = 200
w = torch.rand([N], requires_grad=True, device=cuda0)
b = torch.ones([1], requires_grad=True, device=cuda0)
print(torch.mean(w).item(), b.item())
for i in range(epochs):
x = torch.randn([N], device=cuda0)
y = torch.dot(3*torch.ones([N], device=cuda0), x) - 2
y_hat = torch.dot(w, x) + b #Forward Pass
loss = torch.sum((y_hat - y)**2)
loss.backward() #Backward pass
with torch.no_grad():
w -= learning_rate * w.grad #Updation of weights
b -= learning_rate * b.grad #Updation of bias
w.grad.zero_()
b.grad.zero_()
print(torch.mean(w).item(), b.item())
###Output
0.49993446469306946 1.0
0.5101795196533203 -22.40991973876953
0.5452715158462524 -154.79995727539062
0.4665493071079254 738.7906494140625
0.7926381230354309 6237.61767578125
-0.04708560183644295 21587.00390625
14.779309272766113 130220.171875
680.521484375 1163917.0
2581.61083984375 -4311336.0
309.6868896484375 26068930.0
-50585.45703125 314575744.0
-244173.84375 768827712.0
-270577.75 -1017325504.0
535963.25 2779884544.0
9030191.0 23227740160.0
12492233.0 -123806744576.0
48826044.0 -209460723712.0
521746080.0 -2620250128384.0
-1392474368.0 -8402665209856.0
-6120658432.0 -23540204044288.0
-6664338432.0 -15530675142656.0
629741888.0 -59324336439296.0
-2594528000.0 48036575182848.0
-131401424896.0 -604582078054400.0
56022208512.0 -4698788711104512.0
5475800186880.0 -2.155957618475008e+16
-5831217643520.0 1.7582144410877952e+16
-2981975818240.0 1.2125878087581696e+16
8002400681984.0 -2.0111761607124582e+17
-191201508589568.0 1.21136247176849e+18
-597396463550464.0 -4.786178513755439e+18
-4417433993478144.0 -1.1064058295649462e+20
-6.04589661356032e+16 1.443602120430825e+20
2.9590417703960576e+17 2.2691402203629963e+21
-9.183232784334848e+16 6.175858226607199e+21
-3.122367007157125e+18 2.285859186587163e+22
-9.77361053634737e+18 3.4978226571288557e+22
-2.561430424453015e+19 7.642849059867833e+22
-1.2431416013705183e+19 2.1899068440062953e+21
-6.234175953908531e+19 2.6087652082137988e+23
-9.395034829057583e+19 -7.856741399274302e+23
5.697893027741717e+20 -6.05408766609694e+24
-3.957574482823819e+21 -3.777926159850748e+25
-1.0114252497862996e+23 2.286633123337655e+26
9.724118272222847e+23 -2.4129408208089317e+27
8.464253210000549e+23 -2.667645346411321e+27
3.1600607104906044e+24 4.70376212981457e+27
4.84102458766303e+24 7.591917788847678e+27
1.443730677933988e+24 2.098419672766721e+28
-2.6961336863020185e+25 9.261579759594378e+28
-3.540151917608629e+26 6.718998898978314e+29
-7.929510766696921e+26 2.890781908705227e+30
-2.0740970061375386e+27 -2.24808551092678e+30
-2.063827334776823e+27 -5.751837540431949e+30
-1.9081714651587825e+28 5.5390921045145685e+31
-2.5709807476310585e+28 4.32593020882801e+31
9.213506068798765e+28 6.327394332715303e+32
7.960833946138765e+29 -4.112752485216051e+33
8.967666658802965e+29 -2.5610989424201367e+33
1.4486305922071916e+29 3.608329057408437e+34
-1.4155753679791852e+30 3.035057099139951e+34
nan 1.5836527206275916e+35
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
nan nan
CPU times: user 927 ms, sys: 601 ms, total: 1.53 s
Wall time: 1.54 s
###Markdown
**Data Creation**
###Code
my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","yellow","green"])
data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0)
print(data.shape, labels.shape)
###Output
(1000, 2) (1000,)
###Markdown
**Multiclass Classification**
###Code
plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap)
plt.show()
#labels_orig = labels
# labels = np.mod(labels_orig, 2)
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0)
print(X_train.shape, X_val.shape, labels.shape)
###Output
(750, 2) (250, 2) (1000,)
###Markdown
**Binary Classification Data**
###Code
plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap)
plt.show()
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0)
print(X_train.shape, X_val.shape)
###Output
(750, 2) (250, 2)
###Markdown
**FNN Network** **FNN W/O Framework**
###Code
#Non-Framework FNN
class FFNetwork:
def __init__(self, W1, W2):
self.params={}
self.params["W1"]=W1.copy()
self.params["W2"]=W2.copy()
self.params["B1"]=np.zeros((1,2))
self.params["B2"]=np.zeros((1,4))
self.num_layers=2
self.gradients={}
self.update_params={}
self.prev_update_params={}
for i in range(1,self.num_layers+1):
self.update_params["v_w"+str(i)]=0
self.update_params["v_b"+str(i)]=0
self.update_params["m_b"+str(i)]=0
self.update_params["m_w"+str(i)]=0
self.prev_update_params["v_w"+str(i)]=0
self.prev_update_params["v_b"+str(i)]=0
def forward_activation(self, X):
return 1.0/(1.0 + np.exp(-X))
def grad_activation(self, X):
return X*(1-X)
def softmax(self, X):
exps = np.exp(X)
return exps / np.sum(exps, axis=1).reshape(-1,1)
def forward_pass(self, X, params = None):
if params is None:
params = self.params
self.A1 = np.matmul(X, params["W1"]) + params["B1"] # (N, 2) * (2, 2) -> (N, 2)
self.H1 = self.forward_activation(self.A1) # (N, 2)
self.A2 = np.matmul(self.H1, params["W2"]) + params["B2"] # (N, 2) * (2, 4) -> (N, 4)
self.H2 = self.softmax(self.A2) # (N, 4)
return self.H2
def grad(self, X, Y, params = None):
if params is None:
params = self.params
self.forward_pass(X, params)
m = X.shape[0]
self.gradients["dA2"] = self.H2 - Y # (N, 4) - (N, 4) -> (N, 4)
self.gradients["dW2"] = np.matmul(self.H1.T, self.gradients["dA2"]) # (2, N) * (N, 4) -> (2, 4)
self.gradients["dB2"] = np.sum(self.gradients["dA2"], axis=0).reshape(1, -1) # (N, 4) -> (1, 4)
self.gradients["dH1"] = np.matmul(self.gradients["dA2"], params["W2"].T) # (N, 4) * (4, 2) -> (N, 2)
self.gradients["dA1"] = np.multiply(self.gradients["dH1"], self.grad_activation(self.H1)) # (N, 2) .* (N, 2) -> (N, 2)
self.gradients["dW1"] = np.matmul(X.T, self.gradients["dA1"]) # (2, N) * (N, 2) -> (2, 2)
self.gradients["dB1"] = np.sum(self.gradients["dA1"], axis=0).reshape(1, -1) # (N, 2) -> (1, 2)
def fit(self, X, Y, epochs=1, algo= "GD", display_loss=False,
eta=1, mini_batch_size=100, eps=1e-8,
beta=0.9, beta1=0.9, beta2=0.9, gamma=0.9 ):
if display_loss:
loss = {}
for num_epoch in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
m = X.shape[0]
if algo == "GD":
self.grad(X, Y)
for i in range(1,self.num_layers+1):
self.params["W"+str(i)] -= eta * (self.gradients["dW"+str(i)]/m)
self.params["B"+str(i)] -= eta * (self.gradients["dB"+str(i)]/m)
elif algo == "MiniBatch":
for k in range(0,m,mini_batch_size):
self.grad(X[k:k+mini_batch_size], Y[k:k+mini_batch_size])
for i in range(1,self.num_layers+1):
self.params["W"+str(i)] -= eta * (self.gradients["dW"+str(i)]/mini_batch_size)
self.params["B"+str(i)] -= eta * (self.gradients["dB"+str(i)]/mini_batch_size)
elif algo == "Momentum":
self.grad(X, Y)
for i in range(1,self.num_layers+1):
self.update_params["v_w"+str(i)] = gamma *self.update_params["v_w"+str(i)] + eta * (self.gradients["dW"+str(i)]/m)
self.update_params["v_b"+str(i)] = gamma *self.update_params["v_b"+str(i)] + eta * (self.gradients["dB"+str(i)]/m)
self.params["W"+str(i)] -= self.update_params["v_w"+str(i)]
self.params["B"+str(i)] -= self.update_params["v_b"+str(i)]
elif algo == "NAG":
temp_params = {}
for i in range(1,self.num_layers+1):
self.update_params["v_w"+str(i)]=gamma*self.prev_update_params["v_w"+str(i)]
self.update_params["v_b"+str(i)]=gamma*self.prev_update_params["v_b"+str(i)]
temp_params["W"+str(i)]=self.params["W"+str(i)]-self.update_params["v_w"+str(i)]
temp_params["B"+str(i)]=self.params["B"+str(i)]-self.update_params["v_b"+str(i)]
self.grad(X,Y,temp_params)
for i in range(1,self.num_layers+1):
self.update_params["v_w"+str(i)] = gamma *self.update_params["v_w"+str(i)] + eta * (self.gradients["dW"+str(i)]/m)
self.update_params["v_b"+str(i)] = gamma *self.update_params["v_b"+str(i)] + eta * (self.gradients["dB"+str(i)]/m)
self.params["W"+str(i)] -= eta * (self.update_params["v_w"+str(i)])
self.params["B"+str(i)] -= eta * (self.update_params["v_b"+str(i)])
self.prev_update_params=self.update_params
elif algo == "AdaGrad":
self.grad(X, Y)
for i in range(1,self.num_layers+1):
self.update_params["v_w"+str(i)] += (self.gradients["dW"+str(i)]/m)**2
self.update_params["v_b"+str(i)] += (self.gradients["dB"+str(i)]/m)**2
self.params["W"+str(i)] -= (eta/(np.sqrt(self.update_params["v_w"+str(i)])+eps)) * (self.gradients["dW"+str(i)]/m)
self.params["B"+str(i)] -= (eta/(np.sqrt(self.update_params["v_b"+str(i)])+eps)) * (self.gradients["dB"+str(i)]/m)
elif algo == "RMSProp":
self.grad(X, Y)
for i in range(1,self.num_layers+1):
self.update_params["v_w"+str(i)] = beta*self.update_params["v_w"+str(i)] +(1-beta)*((self.gradients["dW"+str(i)]/m)**2)
self.update_params["v_b"+str(i)] = beta*self.update_params["v_b"+str(i)] +(1-beta)*((self.gradients["dB"+str(i)]/m)**2)
self.params["W"+str(i)] -= (eta/(np.sqrt(self.update_params["v_w"+str(i)]+eps)))*(self.gradients["dW"+str(i)]/m)
self.params["B"+str(i)] -= (eta/(np.sqrt(self.update_params["v_b"+str(i)]+eps)))*(self.gradients["dB"+str(i)]/m)
elif algo == "Adam":
self.grad(X, Y)
num_updates=0
for i in range(1,self.num_layers+1):
num_updates+=1
self.update_params["m_w"+str(i)]=beta1*self.update_params["m_w"+str(i)]+(1-beta1)*(self.gradients["dW"+str(i)]/m)
self.update_params["v_w"+str(i)]=beta2*self.update_params["v_w"+str(i)]+(1-beta2)*((self.gradients["dW"+str(i)]/m)**2)
m_w_hat=self.update_params["m_w"+str(i)]/(1-np.power(beta1,num_updates))
v_w_hat=self.update_params["v_w"+str(i)]/(1-np.power(beta2,num_updates))
self.params["W"+str(i)] -=(eta/np.sqrt(v_w_hat+eps))*m_w_hat
self.update_params["m_b"+str(i)]=beta1*self.update_params["m_b"+str(i)]+(1-beta1)*(self.gradients["dB"+str(i)]/m)
self.update_params["v_b"+str(i)]=beta2*self.update_params["v_b"+str(i)]+(1-beta2)*((self.gradients["dB"+str(i)]/m)**2)
m_b_hat=self.update_params["m_b"+str(i)]/(1-np.power(beta1,num_updates))
v_b_hat=self.update_params["v_b"+str(i)]/(1-np.power(beta2,num_updates))
self.params["B"+str(i)] -=(eta/np.sqrt(v_b_hat+eps))*m_b_hat
if display_loss:
Y_pred = self.predict(X)
loss[num_epoch] = log_loss(np.argmax(Y, axis=1), Y_pred)
if display_loss:
plt.plot(loss.values(), '-o', markersize=5)
plt.xlabel('Epochs')
plt.ylabel('Log Loss')
plt.show()
def predict(self, X):
Y_pred = self.forward_pass(X)
return np.array(Y_pred).squeeze()
def print_accuracy():
Y_pred_train = model.predict(X_train)
Y_pred_train = np.argmax(Y_pred_train,1)
Y_pred_val = model.predict(X_val)
Y_pred_val = np.argmax(Y_pred_val,1)
accuracy_train = accuracy_score(Y_pred_train, Y_train)
accuracy_val = accuracy_score(Y_pred_val, Y_val)
print("Training accuracy", round(accuracy_train, 4))
print("Validation accuracy", round(accuracy_val, 4))
if False:
plt.scatter(X_train[:,0], X_train[:,1], c=Y_pred_train, cmap=my_cmap, s=15*(np.abs(np.sign(Y_pred_train-Y_train))+.1))
plt.show()
%%time
model = FFNetwork(W1, W2)
model.fit(X_train, y_OH_train, epochs=100, eta=1, algo="GD", display_loss=True)
print_accuracy()
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
**FNN Pytorch**
###Code
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.colors
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error, log_loss
from tqdm import tqdm_notebook
import seaborn as sns
import time
from IPython.display import HTML
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import OneHotEncoder
from sklearn.datasets import make_blobs
import torch
from torch import optim
import torch.nn as nn
torch.manual_seed(0)
X_train, Y_train, X_val, Y_val = map(torch.tensor, (X_train, Y_train, X_val, Y_val))
print(X_train.shape, Y_train.shape)
#Type matching
X_train = X_train.float()
Y_train = Y_train.long()
#Book keeping
loss_arr = []
acc_arr = []
#Class for CPU
class FNN(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2, 1024*4),
nn.Sigmoid(),
nn.Linear(1024*4, 4),
nn.Softmax()
)
def forward(self, X):
return self.net(X)
def fit_v2(x, y, model, opt, loss_fn, epochs = 1000):
for i in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
opt.zero_grad()
return loss.item()
fn = FNN()
loss_fn = nn.CrossEntropyLoss()
opt = optim.SGD(fn.parameters(), lr=1)
#Class for GPU
class FNN_L(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2, 1024*4), #(in, out)
nn.Sigmoid(),
nn.Linear(1024*4, 4), #(in, out)
nn.Softmax()
)
def forward(self, X):
return self.net(X)
fn1 = FNN_L()
opt = optim.SGD(fn.parameters(), lr=1)
#CPU Training
device = torch.device("cpu")
X_train=X_train.to(device)
Y_train=Y_train.to(device)
fn.to(device)
tic = time.time()
print('Final loss', fit_v2(X_train, Y_train, fn, opt, loss_fn))
toc = time.time()
print('Time taken in seconds', toc - tic)
#GPU Training
device = torch.device("cuda")
X_train=X_train.to(device)
Y_train=Y_train.to(device)
fn1 = FNN_L()
fn1.to(device)
opt = optim.SGD(fn1.parameters(), lr=1)  # re-create the optimizer for the GPU model's parameters
tic = time.time()
print('Final loss', fit_v2(X_train, Y_train, fn1, opt, loss_fn))
toc = time.time()
print('Time taken', toc - tic)
###Output
_____no_output_____
###Markdown
**CNN**
###Code
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.colors
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error, log_loss
from tqdm import tqdm_notebook
import seaborn as sns
import time
from IPython.display import HTML
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import OneHotEncoder
from sklearn.datasets import make_blobs
import torch
from torch import optim
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torchvision import models #To invoke different models in Pytorch
import copy #used for saving checkpoints of trained model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") #Setting GPU
print(device)
###Output
cuda:0
###Markdown
**LeNet**
###Code
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.cnn_model = nn.Sequential(
nn.Conv2d(3, 6, 5), # (N, 3, 32, 32) -> (N, 6, 28, 28) depth, no. o/p filters, kernel size 5->5x5
nn.Tanh(),
nn.AvgPool2d(2, stride=2), # (N, 6, 28, 28) -> (N, 6, 14, 14)
nn.Conv2d(6, 16, 5), # (N, 6, 14, 14) -> (N, 16, 10, 10)
nn.Tanh(),
nn.AvgPool2d(2, stride=2) # (N,16, 10, 10) -> (N, 16, 5, 5)
)
self.fc_model = nn.Sequential(
nn.Linear(400,120), # (N, 400) -> (N, 120)
nn.Tanh(),
nn.Linear(120,84), # (N, 120) -> (N, 84)
nn.Tanh(),
nn.Linear(84,10) # (N, 84) -> (N, 10)
)
def forward(self, x):
x = self.cnn_model(x)
x = x.view(x.size(0), -1) #Flattening and resizing for Fc layer (400, 16 x 5 x 5)
x = self.fc_model(x)
return x
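# Quick shape check (illustrative): a CIFAR-10-sized batch flows through to (N, 10) logits
print(LeNet()(torch.randn(2, 3, 32, 32)).shape)  # expected: torch.Size([2, 10])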
batch_size = 128
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transforms.ToTensor())
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True) #Load train data from dataset
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transforms.ToTensor())
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False)
#[1000000000,000000000,000000000]
def evaluation(dataloader):
total, correct = 0, 0
for data in dataloader:
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
outputs = net(inputs)
_, pred = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (pred == labels).sum().item()
return 100 * correct / total
net = LeNet().to(device)
loss_fn = nn.CrossEntropyLoss()
opt = optim.Adam(net.parameters())
%%time
loss_arr = [] #Add loss to array in each iteration
loss_epoch_arr = [] # Add loss toarray in each epoch
max_epochs = 20 # 1000->iteration 10 epoch =4 40
#min_loss = 1000
for epoch in range(max_epochs):
for i, data in enumerate(trainloader, 0): #Iteating over tain data in each batch
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
opt.zero_grad()
outputs = net(inputs)
loss = loss_fn(outputs, labels)
loss.backward()
opt.step()
loss_arr.append(loss.item())
loss_epoch_arr.append(loss.item())
print('Epoch: %d/%d, Test acc: %0.2f, Train acc: %0.2f' % (epoch, max_epochs, evaluation(testloader), evaluation(trainloader)))
plt.plot(loss_epoch_arr)
plt.show()
###Output
Epoch: 0/20, Test acc: 37.84, Train acc: 38.32
Epoch: 1/20, Test acc: 43.83, Train acc: 44.10
Epoch: 2/20, Test acc: 45.46, Train acc: 46.40
Epoch: 3/20, Test acc: 49.40, Train acc: 50.34
Epoch: 4/20, Test acc: 49.82, Train acc: 51.83
Epoch: 5/20, Test acc: 51.86, Train acc: 54.13
Epoch: 6/20, Test acc: 53.42, Train acc: 56.33
Epoch: 7/20, Test acc: 53.79, Train acc: 57.22
Epoch: 8/20, Test acc: 55.19, Train acc: 59.55
Epoch: 9/20, Test acc: 55.68, Train acc: 60.67
Epoch: 10/20, Test acc: 55.58, Train acc: 61.43
Epoch: 11/20, Test acc: 55.44, Train acc: 62.42
Epoch: 12/20, Test acc: 56.10, Train acc: 63.80
Epoch: 13/20, Test acc: 56.07, Train acc: 64.28
Epoch: 14/20, Test acc: 56.86, Train acc: 65.49
Epoch: 15/20, Test acc: 56.41, Train acc: 66.59
Epoch: 16/20, Test acc: 56.60, Train acc: 66.80
Epoch: 17/20, Test acc: 56.66, Train acc: 67.84
Epoch: 18/20, Test acc: 56.36, Train acc: 68.26
Epoch: 19/20, Test acc: 56.37, Train acc: 69.13
###Markdown
Extra
###Code
#torch.save(best_model, '/content/drive/My Drive/DL_Course/Model Checkpoints/leNet.pth')
#saved_model = torch.load('/content/drive/My Drive/DL_Course/Model Checkpoints/leNet.pth')
#net.load_state_dict(saved_model) #Invoking checkpoint model
#opt.load_state_dict(saved_model['optimizer_state_dict'])
#loss = saved_model['loss']
#print(evaluation(trainloader), evaluation(testloader))
###Output
_____no_output_____
###Markdown
**Popular CNNs: here, ResNet**
###Code
transform_train = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), #Normalization
])
transform_test = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), #Normalization
])
#Defining training set
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True,
transform=transform_train)
#Defining testset
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True,
transform=transform_test)
num_classes = 10 #No of output classifications
batch_size = 16
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False)
dataiter = iter(trainloader)
images, labels = dataiter.next() #Iterate to next batch
print(images.shape) #Images in batch
print(images[1].shape) #Single image in a batch
print(images[1].dtype)
print(labels[1].item()) #Which class the image correspond to
def imshow(img, title):
npimg = img.numpy() / 2 + 0.5 #De-normalizing & convert to numpy to show image
plt.figure(figsize=(batch_size, 1)) #Setting to show images in a batch
plt.axis('off')
plt.imshow(np.transpose(npimg, (1, 2, 0))) #resizing images as (x,y,z)
plt.title(title) #Display title of shown image
plt.show()
def show_batch_images(dataloader):
images, labels = next(iter(dataloader)) #Iterating to next batch
img = torchvision.utils.make_grid(images) #Combining all images in a batch in a grid
imshow(img, title=[str(x.item()) for x in labels])
for i in range(4): #Printing images in 4 batches
show_batch_images(trainloader) #Images in 1 batch
def evaluation(dataloader, model): #Model added when using Large CNN or existing CNNs
total, correct = 0, 0
for data in dataloader:
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device) #Moving to GPU
outputs = model(inputs) #resnet(inputs)
_, pred = torch.max(outputs.data, 1) #Predicted output
total += labels.size(0)
correct += (pred == labels).sum().item() #Accuracy calculation by taking sum of correct ones
return 100 * correct / total
resnet = models.resnet18(pretrained=True) #using pretrained model
print(resnet)
for param in resnet.parameters(): #Freezing parameters to customise model
param.requires_grad = False
###Output
_____no_output_____
###Markdown
The pretrained fully connected layer maps 512 features to the 1000 ImageNet classes; we replace it with a new 512 -> 10 layer for our 10 classes.
###Code
in_features = resnet.fc.in_features #Customising layers for our use
resnet.fc = nn.Linear(in_features, num_classes)
#No. of parameters in modified network
for param in resnet.parameters():
if param.requires_grad:
print(param.shape)
resnet = resnet.to(device)
loss_fn = nn.CrossEntropyLoss()
opt = optim.SGD(resnet.parameters(), lr=0.01)
%%time
loss_epoch_arr = [] #Book Keeping losses
max_epochs = 4
min_loss = 1000 #Used for checkpointing
n_iters = np.ceil(50000/batch_size)
for epoch in tqdm_notebook(range(max_epochs), total=max_epochs, unit="epoch"):# for epoch in range(max_epochs): #iterating in a epoch
for i, data in enumerate(trainloader, 0): #Enumerate in a batch
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
opt.zero_grad()
outputs = resnet(inputs)
loss = loss_fn(outputs, labels)
loss.backward()
opt.step()
#Creation of checkpoint
if min_loss > loss.item():
min_loss = loss.item()
best_model = copy.deepcopy(resnet.state_dict()) #Copy parameters of model having small loss
print('Min loss %0.2f' % min_loss)
if i % 500 == 0:
print('Iteration: %d/%d, Loss: %0.2f' % (i, n_iters, loss.item()))
del inputs, labels, outputs #Freeing GPU memory and cache for preventing VRAM overflow
torch.cuda.empty_cache()
loss_epoch_arr.append(loss.item())
print('Epoch: %d/%d, Test acc: %0.2f, Train acc: %0.2f' % (
epoch, max_epochs,
evaluation(testloader, resnet), evaluation(trainloader, resnet)))
plt.plot(loss_epoch_arr)
plt.show()
resnet.load_state_dict(torch.load('/content/drive/MyDrive/DL_Course/Model Checkpoints/resnet-trial_1.pth')) #Invoking checkpoint model
print(evaluation(trainloader, resnet), evaluation(testloader, resnet))
#saving trained model till
torch.save(best_model, '/content/drive/My Drive/DL_Course/Model Checkpoints/my-project.pth')
#Load saved model
saved_model = torch.load('/content/drive/My Drive/DL_Course/Model Checkpoints/my-project.pth')
resnet.load_state_dict(saved_model) #Invoking checkpoint model
# Note: best_model saved above is only the model state_dict, so the two lines below
# (which also reference an undefined name `optimizer`; the optimizer here is `opt`)
# would fail, and are kept commented out for reference.
# opt.load_state_dict(saved_model['optimizer_state_dict'])
# loss = saved_model['loss']
print(evaluation(trainloader, resnet), evaluation(testloader, resnet))
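# Sketch (assumption, not the original workflow): saving a full checkpoint dict would make it
# possible to restore the optimizer state and loss as attempted above.
# torch.save({'model_state_dict': resnet.state_dict(),
#             'optimizer_state_dict': opt.state_dict(),
#             'loss': min_loss},
#            '/content/drive/My Drive/DL_Course/Model Checkpoints/my-project.pth')
# ckpt = torch.load('/content/drive/My Drive/DL_Course/Model Checkpoints/my-project.pth')
# resnet.load_state_dict(ckpt['model_state_dict'])
# opt.load_state_dict(ckpt['optimizer_state_dict'])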
###Output
_____no_output_____
|
Fundamental-of-Data-Analytics - Project 2018.ipynb
|
###Markdown
HD Computer Science - Data Analytics Fundamental of Data Analytics - Project 2018 Noa Pereira Prada Schnor G00364704 About the project The box plot is common in data analysis for investigating individual numerical variables. In this project, I investigated and explained box plots and their uses. The boxplot function from the Python package matplotlib.pyplot was used to create box plots. This notebook contains the following topics:
• Summarization of the history of the box plot and situations in which it is used.
• Demonstration of the use of the box plot using data of your choosing.
• Explanation of any relevant terminology such as the terms quartile and percentile.
• Comparison of the box plot to alternatives.
What is a box plot? The 'Box and Whisker Plot' was introduced by the American mathematician John W. Tukey in 1969 as a diagram to visualize the 'Five Number Summary' of any data set [8]. [John Tukey] Since 1969 the box plot has been widely used and is probably one of the most used types of graphic, as it is very effective and easy to read [1, 5, 8]. A box plot is a standardized method for displaying the distribution of data based on their quartiles (minimum, first quartile Q1, median, third quartile Q3 and maximum) [2, 4]. That way, a box plot summarizes the distribution of data from multiple sources in a single graph [1, 5]. [Example of a box plot] A box plot allows the comparison of data from different categories and gives a range of information, such as the shape, center (median) and variability of a data set, which makes decision-making easier [5, 6]. Why and when should a box plot be used? A box plot is a graph that should be used to get a concise summary of one or more numeric variables [1]. Moreover, box plots are good at comparing distributions and at identifying outliers and their values (extreme values) [3]. Box plots can also tell whether the data is symmetric, how tightly the data is grouped, how the values in the data are spread out and whether it is skewed [4]. A good situation in which to use a box plot is when working with several data sets from different sources that are related to each other [5]. How to read the box plot? Basically the box plot shows the 'Five number summary': median, 25th percentile, 75th percentile, minimum and maximum. The line inside the box represents the median (50th percentile) of the data. The ends of the box represent the upper and lower quartiles (75th and 25th percentiles). The extreme lines (whiskers) represent the maximum and minimum values, not including the outliers [1]. Terminology:

|Terminology|Definition|
| --- | --- |
| Box | Main body of a box plot graph [9]|
| Whiskers | Vertical lines that extend to the most extreme, but non-outlier, data points. They represent the response variable. [3, 9]|
| Caps | Horizontal lines at the end of the whiskers [9]|
| Q1 First quartile/25th Percentile/Lower hinge | Middle value between the smallest value and the median (Q2/50th percentile) of a data set [4]|
| Median/Q2 Second quartile/50th Percentile | Horizontal line inside the box that represents the middle value of the data set [4, 9] |
| Q3 Third quartile/75th Percentile/Upper hinge | Middle number between the median (Q2/50th percentile) and the highest number of a data set [4]|
| IQR Interquartile range/H-spread | Range from 25th to 75th percentile (Q3 - Q1) [4]|
| Step | 1.5 * IQR or H-spread [3]|
| Maximum/Upper inner fence | Q3 + 1 step or 1.5 * IQR [4] |
| Minimum/Lower inner fence | Q1 - 1 step or 1.5 * IQR [4] |
| Upper outer fence | Q3 + 2 steps [3] |
| Lower outer fence | Q1 - 2 steps [3] |
| Upper adjacent | Highest value below the upper inner fence/maximum [3] |
| Lower adjacent | Smallest value above the lower inner fence/minimum [3] |
| Outliers/fliers | Points that represent data that extend beyond the whiskers [9] |

A box plot shows basic information about the distribution of the data. The median of a symmetric data set is roughly in the middle of the box [6]. A skewed data set shows a lopsided box plot, where the box is divided into two unequal pieces [6]. If the whisker is longer in the positive direction than in the negative direction, or the mean is larger than the median, the distribution has a positive skew [3]. Demonstration of the use of the box plot Data set used to create an example of a box plot The INtegrated Mapping FOr the Sustainable Development of Ireland's MArine Resource (INFOMAR) Seabed Samples Particle Size Analysis data set contains the locations where the samples have been taken, the particle size analysis (PSA) of the samples and the sediment type classification, which is based on the percentage of sand, mud and gravel (after Folk 1954). More info: [https://www.infomar.ie/]
###Code
import pandas as pd #import library
url = "https://opendata.arcgis.com/datasets/3ee4cf41133b4c54818aceb946cbac92_3.csv"
infomar = pd.read_csv(url, delimiter= ',', header = 0) #open the data set using pandas read_csv function
###Output
_____no_output_____
###Markdown
Check the first 5 rows of the data set
###Code
infomar.head (5) #check the first 5 rows using pandas head function
###Output
_____no_output_____
###Markdown
Drop columns that are not needed for the analysis using pandas' drop function
###Code
infomar = infomar.drop(['SURVEY', 'PSA_DSCRPT', 'INSTRUMENT', 'OBJECTID'], axis=1)
###Output
_____no_output_____
###Markdown
Summarize the data
###Code
infomar.describe() #summarize the data using pandas describe function
###Output
_____no_output_____
###Markdown
Check for missing values
###Code
infomar.isnull().values.any() #check for missing values using pandas isnull function
###Output
_____no_output_____
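###Markdown
The describe output above already lists the quartiles. The short sketch below (added for illustration, not part of the original project) computes them explicitly for the MUD variable used in the plots that follow, together with the IQR and the 1.5 * IQR inner fences defined in the terminology section.
###Code
# Quartiles (25th, 50th and 75th percentiles), IQR and inner fences for the MUD variable
q1, q2, q3 = infomar['MUD'].quantile([0.25, 0.5, 0.75])
iqr = q3 - q1                    # interquartile range (H-spread)
lower_fence = q1 - 1.5 * iqr     # lower inner fence ('minimum')
upper_fence = q3 + 1.5 * iqr     # upper inner fence ('maximum')
print('Q1:', q1, 'Median:', q2, 'Q3:', q3)
print('IQR:', iqr, 'Lower fence:', lower_fence, 'Upper fence:', upper_fence)
###Output
_____no_output_____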
###Markdown
Create a box plot using Matplotlib
'*Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, errorcharts, scatterplots, etc., with just a few lines of code.*' - Matplotlib version 3.0.2 [9].
Basic matplotlib boxplot function: plt.boxplot(x, *optional parameters*)
**x**, the variable/data to be plotted. Some of the optional parameters are briefly described below:
* **notch=None**, default is False. To produce a notched box, set notch = True.
* **sym=None**, symbol for the fliers. Default is 'b+'.
* **vert=None**, default is True (vertical plot). To produce a horizontal box plot, set vert = False.
* **whis=None**, determines the reach of the whiskers beyond the first and third quartiles (by default 1.5 * IQR).
* **positions=None**, sets the positions of the boxes.
* **widths=None**, sets the width of each box.
* **patch_artist=None**, if True the boxes are drawn as filled patches whose properties (e.g. colour) can be set.
* **bootstrap=None**, when bootstrap = None, notches are calculated using a Gaussian-based asymptotic approximation.
* **usermedians=None**, when usermedians = None the median of the data is computed by matplotlib as normal.
* **meanline=None**, when meanline = True and showmeans = True the mean is rendered as a line.
* **showmeans=None**, show the mean inside the box.
* **showcaps=None**, show the caps.
* **showbox=None**, show the box.
* **showfliers=None**, show the outliers.
* **boxprops=None**, style of the box.
* **labels=None**, labels for each dataset.
* **flierprops=None**, style of the outliers (fliers) beyond the caps.
* **medianprops=None**, style of the median.
* **meanprops=None**, style of the mean.
* **capprops=None**, style of the caps.
* **whiskerprops=None**, style of the whiskers.
* **data=None**, the data set from which x is drawn.
Example of box plots with different parameters using the INFOMAR data set
###Code
import matplotlib.pyplot as plt #import the library
df = pd.DataFrame(infomar) #using pandas function to get a dataframe from the data set
fig, axes = plt.subplots (2,3, figsize=(15, 8)) #create subplots (2 rows, 3 columns = 6 graphs) and size of it
capprops = {'color': 'magenta', 'linestyle': '-'} # style of caps
axes[0, 0].boxplot('MUD', data = df) # 1st graph - Basic box plot
axes[0, 0].set_title('Default') #title of the 1st box plot
axes[0, 1].boxplot('MUD', data = df, notch = True) # 2nd graph - Notched box plot
axes[0, 1].set_title('Notched') #title of the 2nd box plot
axes[0, 2].boxplot('MUD', data = df, sym = '') # 3rd graph - Box plot with no fliers
axes[0, 2].set_title('No outliers') #title of the 3rd box plot
axes[1, 0].boxplot('MUD', data = df, patch_artist=True, capprops = capprops) # 4th graph -Coloured box (blue) and caps (magenta)
axes[1, 0].set_title('Coloured box and caps') #title of the 4th box plot
axes[1, 1].boxplot('MUD', data = df, showmeans = True) # 5th graph - Box plot that shows the mean as a green triangle
axes[1, 1].set_title('Box with mean') #title of the 5th box plot
axes[1, 2].boxplot('MUD', data = df, showmeans = True, meanline = True) # 6th graph - Box plot with mean rendered as a line
axes[1, 2].set_title('Mean rendered as a line') #title of the 6th box plot
fig.suptitle("Different styles of box plot of the same data") #title of the whole set of box plots
plt.subplots_adjust(wspace=0.4,hspace=0.4) # adjust the space between plots
plt.show()
###Output
_____no_output_____
###Markdown
| Number summary | |
| --- | --- |
| Mean | 17.26 |
| Q1 / 25th percentile | 0.00 |
| Q2 / 50th percentile / Median | 3.92 |
| Q3 / 75th percentile | 25.73 |
| IQR (Q3 - Q1) | 25.73 |
| Max (excluding outliers = Q3 + 1.5 * IQR) | 64.32 |
* There is no lower whisker.
* The examples that show the mean help to identify the skewness of the data.
* The data are right-skewed (positive skew), with a long tail to the right and many outliers (greater than 64.32).
* In the example with no outliers shown, the impression of the spread is affected, as the box and whiskers appear to cover the full range of the data.
Check how many samples have outlier values for the variable MUD
###Code
MUD_outliers = df [df.MUD >64.32]
MUD_outliers
#From a sample of 1653, 141 have outlier values for the variable MUD (8.53% of the sample).
###Output
_____no_output_____
###Markdown
Alternatives to the box plot
A box plot is a good statistical graph; however, it shows only a simple summary of the distribution of the data, so it is not the best method for detailed analysis. A box plot can hide some details of a distribution: even though it shows whether a data set is symmetric, it cannot tell the shape of that symmetry. Therefore, to examine those details it should be used in combination with another statistical graph method, such as a histogram and/or a stem and leaf display [1, 3, 6]. Sometimes a histogram is preferable to a box plot, for instance in cases where there is little variance among the observed frequencies.
###Code
f, axes = plt.subplots(2, 3, figsize=(15,8)) #subplot (2 rows, 3 columns = 6 graphs) and the size of it
#box plot and histogram created using matplotlib.pyplot function
axes [0,0]. boxplot('MUD', vert = False, data=df) # create 1st graph - box plot - variable MUD
axes [0,0].set_title('Mud Box plot') #title of the box plot create
axes [0,1]. boxplot('SAND', vert = False, data=df) # create 2nd graph - box plot - variable SAND
axes [0,1].set_title('Sand Box plot') # title of the box plot created
axes [0,2].boxplot('GRAVEL', vert = False, data=df) # create 3rd graph - box plot - variable GRAVEL
axes [0,2].set_title('Gravel Box plot') # title of the box plot created
axes [1,0].hist('MUD', data = df) #create 4th graph - histogram - variable MUD
axes [1,0].set_title('Mud Histogram') #title of the histogram created
axes [1,1].hist('SAND', data = df) # create 5th graph - histogram - variable SAND
axes [1,1].set_title('Sand Histogram') #title of the histogram created
axes [1,2].hist('GRAVEL', data=df) # create 6th graph - histogram - variable GRAVEL
axes [1,2].set_title('Gravel Histogram') # title of the histogram created
plt.show()
###Output
_____no_output_____
|
notebooks/ch05_01_bank.ipynb
|
###Markdown
5.1 Predicting Sales Success (Classification) Common preprocessing
###Code
# 공통 처리
# 불필요한 경고 메시지 무시
import warnings
warnings.filterwarnings('ignore')
# 라이브러리 임포트
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# 한글 글꼴 설정
import platform
if platform.system() == 'Windows':
plt.rc('font', family='Malgun Gothic')
elif platform.system() == 'Darwin':
plt.rc('font', family='Apple Gothic')
# 데이터프레임 출력용 함수
from IPython.display import display
# 숫자 출력 조정
# 넘파이 부동소수점 출력 자리수 설정
np.set_printoptions(suppress=True, precision=4)
# 판다스 부동소수점 출력 자리수 설정
pd.options.display.float_format = '{:.4f}'.format
# 데이터프레임 모든 필드 출력
pd.set_option("display.max_columns",None)
# 그래프 글꼴 크기 설정
plt.rcParams["font.size"] = 14
# 난수 시드
random_seed = 123
# 혼동행렬 출력용 함수
def make_cm(matrix, columns):
# matrix : 넘파이 배열
# columns : 필드명 리스트
n = len(columns)
# '정답 데이터'를 n번 반복해 연접한 리스트
act = ['정답데이터'] * n
pred = ['예측결과'] * n
# 데이터프레임 생성
cm = pd.DataFrame(matrix,
columns=[pred, columns], index=[act, columns])
return cm
###Output
_____no_output_____
###Markdown
5.1.4 From Loading the Data to Checking the Data - Loading the data
###Code
# 데이터 집합을 내려받아 압축 해제
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank.zip -O bank.zip | tail -n 1
!unzip -o bank.zip | tail -n 1
# 역주: 위 명령에서 오류가 날 경우 URL의 파일을 직접 내려받아 notebooks 디렉토리에
# 압축을 해제하면 정상 진행할 수 있습니다.
# bank-full.csv 파일을 데이터프레임으로 읽어 들이기
df_all = pd.read_csv('bank-full.csv', sep=';')
# 우리말 필드명을 정의
columns = [
'연령', '직업', '혼인_여부', '학력', '채무불이행', '평균잔고',
'주택대출', '신용대출', '연락수단', '마지막통화일',
'마지막통화월', '마지막통화시간', '통화횟수_캠페인중',
'마지막영업후_경과일수', '통화횟수_캠페인전', '지난영업_결과',
'이번영업_결과'
]
df_all.columns = columns
###Output
_____no_output_____
###Markdown
Checking the data
###Code
# 데이터프레임 내용 확인
display(df_all.head())
# 데이터 건수와 필드 수 확인
print(df_all.shape)
print()
# '이번영업_결과' 필드의 값 분포 확인
print(df_all['이번영업_결과'].value_counts())
print()
# 영업 성공률
rate = df_all['이번영업_결과'].value_counts()['yes']/len(df_all)
print(f'영업 성공률: {rate:.4f}')
# 누락값 확인
print(df_all.isnull().sum())
###Output
_____no_output_____
###Markdown
5.1.5 Data Preprocessing and Data Splitting - Data preprocessing, step 1
###Code
# get_dummies 함수를 사용해 범주 값에 원-핫 인코딩 적용
# 필드에 원-핫 인코딩을 적용하는 함수
def enc(df, column):
df_dummy = pd.get_dummies(df[column], prefix=column)
df = pd.concat([df.drop([column],axis=1),df_dummy],axis=1)
return df
df_all2 = df_all.copy()
df_all2 = enc(df_all2, '직업')
df_all2 = enc(df_all2, '혼인_여부')
df_all2 = enc(df_all2, '학력')
df_all2 = enc(df_all2, '연락수단')
df_all2 = enc(df_all2, '지난영업_결과')
# 결과 확인
display(df_all2.head())
###Output
_____no_output_____
###Markdown
Preprocessing step 2
###Code
# yes/no를 1과 0으로 변환
# 이진 레이블값(yes/no)를 정수(1/0)으로 변환하는 함수
def enc_bin(df, column):
df[column] = df[column].map(dict(yes=1, no=0))
return df
df_all2 = enc_bin(df_all2, '채무불이행')
df_all2 = enc_bin(df_all2, '주택대출')
df_all2 = enc_bin(df_all2, '신용대출')
df_all2 = enc_bin(df_all2, '이번영업_결과')
# 결과 확인
display(df_all2.head())
###Output
_____no_output_____
###Markdown
Preprocessing step 3
###Code
# 달 이름(jan, feb ..)을 숫자(1, 2 ..)로 변환
month_dict = dict(jan=1, feb=2, mar=3, apr=4,
may=5, jun=6, jul=7, aug=8, sep=9, oct=10,
nov=11, dec=12)
def enc_month(df, column):
df[column] = df[column].map(month_dict)
return df
df_all2 = enc_month(df_all2, '마지막통화월')
# 결과 확인
display(df_all2.head())
###Output
_____no_output_____
###Markdown
Data split
###Code
# 입력 데이터와 정답 데이터를 나누기
x = df_all2.drop('이번영업_결과', axis=1)
y = df_all2['이번영업_결과'].values
# 학습 데이터와 검증 데이터를 나누기
# 학습 데이터 60%, 검증 데이터 40%의 비율이 되도록 분할
test_size = 0.4
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=test_size, random_state=random_seed,
stratify=y)
###Output
_____no_output_____
###Markdown
5.1.6 Choosing an Algorithm - Algorithm selection
###Code
# 후보 알고리즘 리스트 만들기
# 로지스틱 회귀 (4.3.3)
from sklearn.linear_model import LogisticRegression
algorithm1 = LogisticRegression(random_state=random_seed)
# 결정 트리 (4.3.6)
from sklearn.tree import DecisionTreeClassifier
algorithm2 = DecisionTreeClassifier(random_state=random_seed)
# 랜덤 포레스트 (4.3.7)
from sklearn.ensemble import RandomForestClassifier
algorithm3 = RandomForestClassifier(random_state=random_seed)
# XGBoost (4.3.8)
from xgboost import XGBClassifier
algorithm4 = XGBClassifier(random_state=random_seed)
algorithms = [algorithm1, algorithm2, algorithm3, algorithm4]
# 교차검증법을 적용해 최적의 알고리즘을 선정한다
from sklearn.model_selection import StratifiedKFold
stratifiedkfold = StratifiedKFold(n_splits=3)
from sklearn.model_selection import cross_val_score
for algorithm in algorithms:
# 교차검증법 적용
scores = cross_val_score(algorithm , x_train, y_train,
cv=stratifiedkfold, scoring='roc_auc')
score = scores.mean()
name = algorithm.__class__.__name__
print(f'평균 정확도: {score:.4f} 개별 정확도: {scores} {name}')
###Output
_____no_output_____
###Markdown
Conclusion: XGBoost showed the highest performance among the four algorithms. -> From here on, XGBoost is used. 5.1.7 Training, Prediction and Evaluation
###Code
# 알고리즘 선택 (XGBoost)
algorithm = XGBClassifier(random_state=random_seed)
# 학습
algorithm.fit(x_train, y_train)
# 예측
y_pred = algorithm.predict(x_test)
# 평가
# 혼동행렬 출력
from sklearn.metrics import confusion_matrix
df_matrix = make_cm(
confusion_matrix(y_test, y_pred), ['실패', '성공'])
display(df_matrix)
# 정확률, 재현율, F-점수 계산하기
from sklearn.metrics import precision_recall_fscore_support
precision, recall, fscore, _ = precision_recall_fscore_support(
y_test, y_pred, average='binary')
print(f'정밀도: {precision:.4f} 재현율: {recall:.4f} F-점수: {fscore:.4f}')
###Output
_____no_output_____
###Markdown
5.1.8 Tuning - Frequency distribution of the probability values
###Code
# 확률값의 도수분포 그래프
import seaborn as sns
# y=0인 데이터의 확률값 구하기
y_proba0 = algorithm.predict_proba(x_test)[:,1]
# y_test=0과 y_test=1로 데이터를 분할
y0 = y_proba0[y_test==0]
y1 = y_proba0[y_test==1]
# 산포도 그리기
plt.figure(figsize=(6,6))
plt.title('확률값의 도수분포')
sns.distplot(y1, kde=False, norm_hist=True,
bins=50, color='b', label='성공')
sns.distplot(y0, kde=False, norm_hist=True,
bins=50, color='k', label='실패')
plt.xlabel('확률값')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Make predictions using the predict_proba function with a threshold other than 0.5 (see Section 4.4)
###Code
# 설정한 역치 값에 대해 예측을 수행하는 함수
def pred(algorithm, x, thres):
# 확률값 꺼내기 (행렬)
y_proba = algorithm.predict_proba(x)
# 예측결과 1의 함숫값
y_proba1 = y_proba[:,1]
# 예측결과 1의 함숫값이 역치보다 큰가?
y_pred = (y_proba1 > thres).astype(int)
return y_pred
# 역치를 0.05씩 감소시켜가며 정확률, 재현율, F-점수를 계산한다
thres_list = np.arange(0.5, 0, -0.05)
for thres in thres_list:
y_pred = pred(algorithm, x_test, thres)
pred_sum = y_pred.sum()
precision, recall, fscore, _ = precision_recall_fscore_support(
y_test, y_pred, average='binary')
print(f'역치: {thres:.2f} 양성 예측 수: {pred_sum}\
정밀도: {precision:.4f} 재현율: {recall:.4f} F-점수: {fscore:.4f})')
# F-점수가 최대가 되는 역치는 0.30
y_final = pred(algorithm, x_test, 0.30)
# 혼동행렬을 출력
df_matrix2 = make_cm(
confusion_matrix(y_test, y_final), ['실패', '성공'])
display(df_matrix2)
# 정확률, 재현율, F-점수를 계산
precision, recall, fscore, _ = precision_recall_fscore_support(
y_test, y_final, average='binary')
print(f'정밀도: {precision:.4f} 재현율: {recall:.4f}\
F-점수: {fscore:.4f}')
###Output
_____no_output_____
###Markdown
5.1.9 Importance Analysis
###Code
# 중요도 분석
# 중요도 벡터 계산
importances = algorithm.feature_importances_
# 필드명을 키로 Series 객체를 생성
w = pd.Series(importances, index=x.columns)
# 내림차순으로 정렬
u = w.sort_values(ascending=False)
# 상위 10개 항목을 추출
v = u[:10]
# 중요도의 막대그래프를 출력
plt.title('입력 필드의 중요도')
plt.bar(range(len(v)), v, color='b', align='center')
plt.xticks(range(len(v)), v.index, rotation=90)
plt.show()
column = '지난영업_결과_success'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='성공')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='실패')
plt.legend()
plt.show()
column = '마지막통화시간'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=50, color='b', label='성공')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=50, color='k', label='실패')
plt.legend()
plt.show()
column = '연락수단_unknown'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='성공')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='실패')
plt.legend()
plt.show()
column = '주택대출'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='성공')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='실패')
plt.legend()
plt.show()
column = '혼인_여부_single'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='성공')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='실패')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
5.1 Predicting Sales Success (Classification) Common preprocessing
###Code
# 日本語化ライブラリ導入
!pip install japanize-matplotlib | tail -n 1
# 共通事前処理
# 余分なワーニングを非表示にする
import warnings
warnings.filterwarnings('ignore')
# 必要ライブラリのimport
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# matplotlib日本語化対応
import japanize_matplotlib
# データフレーム表示用関数
from IPython.display import display
# 表示オプション調整
# numpyの浮動小数点の表示精度
np.set_printoptions(suppress=True, precision=4)
# pandasでの浮動小数点の表示精度
pd.options.display.float_format = '{:.4f}'.format
# データフレームですべての項目を表示
pd.set_option("display.max_columns",None)
# グラフのデフォルトフォント指定
plt.rcParams["font.size"] = 14
# 乱数の種
random_seed = 123
# 混同行列表示用関数
def make_cm(matrix, columns):
# matrix numpy配列
# columns 項目名リスト
n = len(columns)
# '正解データ'をn回繰り返すリスト生成
act = ['正解データ'] * n
pred = ['予測結果'] * n
#データフレーム生成
cm = pd.DataFrame(matrix,
columns=[pred, columns], index=[act, columns])
return cm
###Output
_____no_output_____
###Markdown
5.1.4 From Loading the Data to Checking the Data - Loading the data
###Code
# 公開データのダウンロードと解凍
!wget https://archive.ics.uci.edu/ml/\
machine-learning-databases/00222/bank.zip
!unzip -o bank.zip
# bank-full.csvをデータフレームに取り込み
df_all = pd.read_csv('bank-full.csv', sep=';')
# 項目名を日本語に置き換える
columns = [
'年齢', '職業', '婚姻', '学歴', '債務不履行', '平均残高',
'住宅ローン', '個人ローン', '連絡手段', '最終通話日',
'最終通話月', '最終通話秒数', '通話回数_販促中',
'前回販促後_経過日数', '通話回数_販促前', '前回販促結果',
'今回販促結果'
]
df_all.columns = columns
###Output
_____no_output_____
###Markdown
Checking the data
###Code
# データフレームの内容確認
display(df_all.head())
# 学習データの件数と項目数確認
print(df_all.shape)
print()
# 「今回販促結果」の値の分布確認
print(df_all['今回販促結果'].value_counts())
print()
# 営業成功率
rate = df_all['今回販促結果'].value_counts()['yes']/len(df_all)
print(f'営業成功率: {rate:.4f}')
# 欠損値の確認
print(df_all.isnull().sum())
###Output
_____no_output_____
###Markdown
5.1.5 Data Preprocessing and Data Splitting - Data preprocessing, step 1
###Code
# get_dummies関数でカテゴリ値をOne-Hotエンコーディング
# 項目をOne-Hotエンコーディングするための関数
def enc(df, column):
df_dummy = pd.get_dummies(df[column], prefix=column)
df = pd.concat([df.drop([column],axis=1),df_dummy],axis=1)
return df
df_all2 = df_all.copy()
df_all2 = enc(df_all2, '職業')
df_all2 = enc(df_all2, '婚姻')
df_all2 = enc(df_all2, '学歴')
df_all2 = enc(df_all2, '連絡手段')
df_all2 = enc(df_all2, '前回販促結果')
# 結果確認
display(df_all2.head())
###Output
_____no_output_____
###Markdown
Preprocessing step 2
###Code
# yes/noを1/0に置換
# 2値 (yes/no)の値を(1/0)に置換する関数
def enc_bin(df, column):
df[column] = df[column].map(dict(yes=1, no=0))
return df
df_all2 = enc_bin(df_all2, '債務不履行')
df_all2 = enc_bin(df_all2, '住宅ローン')
df_all2 = enc_bin(df_all2, '個人ローン')
df_all2 = enc_bin(df_all2, '今回販促結果')
# 結果確認
display(df_all2.head())
###Output
_____no_output_____
###Markdown
Preprocessing step 3
###Code
# 月名(jan, feb,..)を1,2.. に置換
month_dict = dict(jan=1, feb=2, mar=3, apr=4,
may=5, jun=6, jul=7, aug=8, sep=9, oct=10,
nov=11, dec=12)
def enc_month(df, column):
df[column] = df[column].map(month_dict)
return df
df_all2 = enc_month(df_all2, '最終通話月')
# 結果確認
display(df_all2.head())
###Output
_____no_output_____
###Markdown
Data split
###Code
# 入力データと正解データの分割
x = df_all2.drop('今回販促結果', axis=1)
y = df_all2['今回販促結果'].values
# 訓練データと検証データの分割
# 訓練データ60% 検証データ40%の比率で分割する
test_size = 0.4
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=test_size, random_state=random_seed,
stratify=y)
###Output
_____no_output_____
###Markdown
5.1.6 Algorithm Selection - Selecting an algorithm
###Code
# 候補アルゴリズムのリスト化
# ロジスティック回帰 (4.3.3)
from sklearn.linear_model import LogisticRegression
algorithm1 = LogisticRegression(random_state=random_seed)
# 決定木 (4.3.6)
from sklearn.tree import DecisionTreeClassifier
algorithm2 = DecisionTreeClassifier(random_state=random_seed)
# ランダムフォレスト (4.3.7)
from sklearn.ensemble import RandomForestClassifier
algorithm3 = RandomForestClassifier(random_state=random_seed)
# XGBoost (4.3.8)
from xgboost import XGBClassifier
algorithm4 = XGBClassifier(random_state=random_seed)
algorithms = [algorithm1, algorithm2, algorithm3, algorithm4]
# 交差検定法を用いて最適なアルゴリズムの選定
from sklearn.model_selection import StratifiedKFold
stratifiedkfold = StratifiedKFold(n_splits=3)
from sklearn.model_selection import cross_val_score
for algorithm in algorithms:
# 交差検定法の実行
scores = cross_val_score(algorithm , x_train, y_train,
cv=stratifiedkfold, scoring='roc_auc')
score = scores.mean()
name = algorithm.__class__.__name__
print(f'平均スコア: {score:.4f} 個別スコア: {scores} {name}')
###Output
_____no_output_____
###Markdown
Conclusion: XGBoost has the highest accuracy among the four candidates. -> From here on, XGBoost is used. 5.1.7 Training, Prediction and Evaluation
###Code
# アルゴリズム選定
# XGBoostを利用
algorithm = XGBClassifier(random_state=random_seed)
# 学習
algorithm.fit(x_train, y_train)
# 予測
y_pred = algorithm.predict(x_test)
# 評価
# 混同行列を出力
from sklearn.metrics import confusion_matrix
df_matrix = make_cm(
confusion_matrix(y_test, y_pred), ['失敗', '成功'])
display(df_matrix)
# 適合率, 再現率, F値を計算
from sklearn.metrics import precision_recall_fscore_support
precision, recall, fscore, _ = precision_recall_fscore_support(
y_test, y_pred, average='binary')
print(f'適合率: {precision:.4f} 再現率: {recall:.4f} F値: {fscore:.4f}')
###Output
_____no_output_____
###Markdown
5.1.8 Tuning - Frequency distribution of the probability values
###Code
# 確率値の度数分布グラフ
import seaborn as sns
# y=0の確率値取得
y_proba0 = algorithm.predict_proba(x_test)[:,1]
# y_test=0 と y_test=1 でデータ分割
y0 = y_proba0[y_test==0]
y1 = y_proba0[y_test==1]
# 散布図描画
plt.figure(figsize=(6,6))
plt.title('確率値の度数分布')
sns.distplot(y1, kde=False, norm_hist=True,
bins=50, color='b', label='成功')
sns.distplot(y0, kde=False, norm_hist=True,
bins=50, color='k', label='失敗')
plt.xlabel('確率値')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Make predictions using the predict_proba function with a threshold other than 0.5 (see Section 4.4)
###Code
# 閾値を変更した場合の予測関数の定義
def pred(algorithm, x, thres):
# 確率値の取得(行列)
y_proba = algorithm.predict_proba(x)
# 予測結果1の確率値
y_proba1 = y_proba[:,1]
# 予測結果1の確率値 > 閾値
y_pred = (y_proba1 > thres).astype(int)
return y_pred
# 閾値を0.05刻みに変化させて、適合率, 再現率, F値を計算する
thres_list = np.arange(0.5, 0, -0.05)
for thres in thres_list:
y_pred = pred(algorithm, x_test, thres)
pred_sum = y_pred.sum()
precision, recall, fscore, _ = precision_recall_fscore_support(
y_test, y_pred, average='binary')
print(f'閾値: {thres:.2f} 陽性予測数: {pred_sum}\
適合率: {precision:.4f} 再現率: {recall:.4f} F値: {fscore:.4f})')
# F値を最大にする閾値は0.30
y_final = pred(algorithm, x_test, 0.30)
# 混同行列を出力
df_matrix2 = make_cm(
confusion_matrix(y_test, y_final), ['失敗', '成功'])
display(df_matrix2)
# 適合率, 再現率, F値を計算
precision, recall, fscore, _ = precision_recall_fscore_support(
y_test, y_final, average='binary')
print(f'適合率: {precision:.4f} 再現率: {recall:.4f}\
F値: {fscore:.4f}')
###Output
_____no_output_____
###Markdown
5.1.9 Importance Analysis
###Code
# 重要度分析
# 重要度ベクトルの取得
importances = algorithm.feature_importances_
# 項目名をキーにSeriesを生成
w = pd.Series(importances, index=x.columns)
# 値の大きい順にソート
u = w.sort_values(ascending=False)
# top10のみ抽出
v = u[:10]
# 重要度の棒グラフ表示
plt.title('入力項目の重要度')
plt.bar(range(len(v)), v, color='b', align='center')
plt.xticks(range(len(v)), v.index, rotation=90)
plt.show()
column = '前回販促結果_success'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='成功')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='失敗')
plt.legend()
plt.show()
column = '最終通話秒数'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=50, color='b', label='成功')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=50, color='k', label='失敗')
plt.legend()
plt.show()
column = '連絡手段_unknown'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='成功')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='失敗')
plt.legend()
plt.show()
column = '住宅ローン'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='成功')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='失敗')
plt.legend()
plt.show()
column = '婚姻_single'
sns.distplot(x_test[y_test==1][column], kde=False, norm_hist=True,
bins=5,color='b', label='成功')
sns.distplot(x_test[y_test==0][column], kde=False, norm_hist=True,
bins=5,color='k', label='失敗')
plt.legend()
plt.show()
###Output
_____no_output_____
|
ex5 Normal distribution.ipynb
|
###Markdown
Exercise 5 - parametric model normal distribution
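As a brief recap of the maths implemented in the code below (added for clarity, using the notation of the functions): with maximum-likelihood estimates $\hat{\mu}$ and $\hat{\sigma}^2$ from mle_norm, the target service level is $\tau = \frac{p - c}{p + h}$ for price $p$, cost $c$ and holding cost $h$, the estimated optimal order quantity is the $\tau$-quantile $\hat{Q} = \hat{\mu} + \hat{\sigma}\,\Phi^{-1}(\tau)$, and confidence_interval_direct reports the interval $\hat{Q} \pm z_{1-\alpha/2}\,\hat{\sigma}/\sqrt{n}$.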
###Code
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as rnd
import scipy.stats as sts
import pandas as pd
def mle_norm(data):
muhat = np.mean(data)
sigma2hat = np.var(data)
return muhat,sigma2hat
def confidence_interval_direct(data,tau,alpha):
muhat,sigma2hat = mle_norm(data)
Qhat = sts.norm.ppf(tau,muhat,scale=np.sqrt(sigma2hat))
n = len(data)
quantile_normal_distribution = sts.norm.ppf(1 - 0.5*alpha)
QL = Qhat - quantile_normal_distribution*np.sqrt(sigma2hat)/np.sqrt(n)
QU = Qhat + quantile_normal_distribution*np.sqrt(sigma2hat)/np.sqrt(n)
return (QL , QU , Qhat)
def exercise5():
cost = 2.95
price = 3.27
holding_cost = 0.07
alpha = 0.05
#change costs for different stores
fn = 'BakeryData.xlsx'
df = pd.read_excel(fn)
tau = (price-cost) / (price + holding_cost)
#change the start offset in iloc (here 5) to a value between 0 and 6 to select a different weekday
QL,QU,Qhat = confidence_interval_direct(df.iloc[5:1004:7,2],tau,alpha)
print('direct confidence interval for optimal order Q:')
print('(',QL, ',', QU,')')
print('optimal order quantity:',Qhat)
print('finished')
exercise5()
###Output
direct confidence interval for optimal order Q:
( 205.7687837275816 , 208.87842427646046 )
optimal order quantity: 207.32360400202103
finished
|
_notebooks/2022-04-30-Tabular-Playground-Series-April-2022.ipynb
|
###Markdown
Tabular Playground Series April 2022
> "Random Forest Approach to TPS April 2022"
- toc: true
- branch: master
- badges: true
- comments: true
- categories: [kaggle, rf, jupyter, tps]
- hide: false
###Code
# Required modules
import tqdm
import numpy as np
import pandas as pd
import seaborn as sns
from zipfile import ZipFile
from matplotlib import pyplot as plt
from sklearn.ensemble import RandomForestClassifier
# Config
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 12)
###Output
_____no_output_____
###Markdown
Before running the cell below, upload your Kaggle API token (kaggle.json) so that the command doesn't throw an error.
###Code
# Create kaggle folder
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
# Test the command
!kaggle competitions download -c tabular-playground-series-apr-2022
# Extract the zip file
with ZipFile('/content/tabular-playground-series-apr-2022.zip', 'r') as zf:
zf.extractall('./')
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# Load the data
train = pd.read_csv('train.csv')
train.head()
# Inspecting the data
train.info()
train.describe()
# Load the train labels
labels = pd.read_csv('train_labels.csv')
labels.head()
# Correlation matrix
sns.heatmap(train.corr(), annot=True, vmin=-1, vmax=1, cmap='RdYlGn')
# Load the data
test = pd.read_csv('test.csv')
test.head()
###Output
_____no_output_____
###Markdown
There are no missing values in the data.
###Code
# Missing values
if train.isna().any().any():
print(train.isna().sum()/train.shape[0])
else:
print("No Missing values")
# Merge the features and labels for the training data
train_all = train.merge(labels, on='sequence')
###Output
_____no_output_____
###Markdown
Preparation
###Code
# Important functions
def time_series_train_test_split(X, y, split_col, target_col, train_size=0.8):
train_len = int(len(X[split_col].unique()) * train_size)
condition_mask = X[split_col].isin(list(range(train_len+1)))
X_train, y_train = X[condition_mask], y[condition_mask]
X_valid, y_valid = X[np.logical_not(condition_mask)], y[np.logical_not(condition_mask)]
return X_train, X_valid, y_train, y_valid
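# Note (added): in the split above, sequences with id 0 .. train_len go to the training set
# and all later sequences go to the validation set, so the split respects the sequence/time
# ordering instead of shuffling rows at random.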
# Separating out features and labels
X = train_all.drop(['state'], axis=1)
y = train_all['state']
# Train and valid split of the data
X_train, X_valid, y_train, y_valid = time_series_train_test_split(X, y, 'sequence', 'state')
###Output
_____no_output_____
###Markdown
Modelling Approach-1: I have used classical machine learning models to fit the data and classify the sequences into their states.
###Code
# Forests for time series
model = RandomForestClassifier(n_jobs=-1)
model.fit(X_train, y_train)
# Predictions
train_pred = model.predict(X_train.values)
valid_pred = model.predict(X_valid.values)
test_pred = model.predict(test.values)
# Evaluation
print(f"Train Accuarcy: {model.score(X_train, y_train)}")
print(f"Valid Accuarcy: {model.score(X_valid, y_valid)}")
test_all = pd.concat([test, pd.Series(test_pred)], axis=1)
test_all.head()
###Output
_____no_output_____
###Markdown
For each timestamp of a particular sequence the model will predict whether it is in state 0 or 1. But the test results should be in the format of one prediction per sequence, i.e. the sequence number and its predicted state, so the per-timestep predictions are aggregated below.
###Code
# Test Predictions
test_actual = test_all.groupby(['sequence']).agg({0: np.mean}).reset_index()
test_actual[0] = np.where(test_actual[0] < 0.5, 0, 1)
test_actual.head()
# Generating output file
submission = pd.read_csv('/content/sample_submission.csv')
# Assign the merged result back; otherwise the dummy states from the sample file would be written out
submission = submission.merge(test_actual, on='sequence').drop(['state'], axis=1).rename(columns={0: 'state'})
submission.to_csv('output.csv', index=False)
# Submission
!kaggle competitions submit -c tabular-playground-series-apr-2022 -f output.csv -m "RF 100"
###Output
100% 95.5k/95.5k [00:02<00:00, 44.6kB/s]
Successfully submitted to Tabular Playground Series - Apr 2022
|
Feature Calculation Example.ipynb
|
###Markdown
Features can be generated from the terminal using the following command:```python keplerml.py data/filelists/ex_filelist.txt data/lightcurves/ data/output/Example_output.p ``` This requires Python 3+ to be the default version; replace `python` with an appropriate version if this is not the case, for example `python3.7`. The above terminal command is equivalent to the following:
###Code
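# Note (added): this assumes the feature-calculation module has already been imported in this
# session, e.g. `import keplerml as fc` (module name inferred from the keplerml.py script above).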
path_to_filelist = './data/filelists/ex_filelist.txt'
path_to_fits = './data/lightcurves/'
output_file = 'data/output/Example_output.p'
features = fc.features_from_filelist(path_to_filelist,path_to_fits,output_file,fl_as_array=False,verbose=True,prime_feats=False)
###Output
Reading ./data/filelists/ex_filelist.txt...
Using 47 cpus to calculate features...
Importing 247 lightcurves...
Lightcurve import took 0:00:01.129264
Processing 247 files...
247/247 completed. Time for chunk: 0:01:06.296283
Features have been calculated, total time to calculate features: 0:01:06.308606
Saving output to data/output/Example_output.p
Cleaning up...
Done.
###Markdown
Features are returned in a Pandas DataFrame, and saved to the specified output file as a pickled dataframe, which can be read in using the pickle module
###Code
features.head()
import pickle
output_file = 'data/output/Example_output.p'
with open(output_file,'rb') as f:
feats = pickle.load(f)
feats.head()
###Output
_____no_output_____
###Markdown
The features are optimized using the `@njit` decorator from the `numba` package. To make full use of this, the code to be optimized by numba needs to run once. This can be done manually as follows:
###Code
lc_path = './data/lightcurves/kplr001026032-2011271113734_llc.fits'
lc = fc.import_lcs(lc_path)
t = lc[1]
nf = lc[2]
err = lc[3]
lc_feats = fc.feats(t,nf,err)
###Output
_____no_output_____
###Markdown
After priming the feature calculation, the features for a filelist can be run in the same way as before.As a note, the `features_from_filelist` method will run a primer by default using the first lightcurve. Whether primed manually as above, with the default, or even run specifically without priming, runs following the first will be optimized by `numba` and be quicker.Note the drastically improved runtime following the manual priming above:
###Code
path_to_filelist = './data/filelists/ex_filelist.txt'
path_to_fits = './data/lightcurves/'
output_file = 'data/output/Example_output.p'
features = fc.features_from_filelist(path_to_filelist,path_to_fits,output_file,fl_as_array=False,verbose=True,prime_feats=False)
###Output
Reading ./data/filelists/ex_filelist.txt...
Using 47 cpus to calculate features...
Importing 247 lightcurves...
Lightcurve import took 0:00:01.126179
Processing 247 files...
247/247 completed. Time for chunk: 0:00:12.281421
Features have been calculated, total time to calculate features: 0:00:12.294860
Saving output to data/output/Example_output.p
Cleaning up...
Done.
###Markdown
The file list can also be fed into the feature calculator as a list of filenames, produced however you like. Two examples below.
###Code
path_to_filelist = './data/filelists/ex_filelist.txt'
with open(path_to_filelist,'r') as f:
files = f.read().splitlines()
path_to_fits = './data/lightcurves/'
output_file = 'data/output/Example_output.p'
feats = fc.features_from_filelist(files,path_to_fits,output_file,fl_as_array=True,verbose=True)
import os
path_to_fits = './data/lightcurves/'
files = os.listdir('data/lightcurves')
output_file = 'data/output/Example_output.p'
feats = fc.features_from_filelist(files,path_to_fits,output_file,fl_as_array=True,verbose=True)
###Output
Using 47 cpus to calculate features...
Importing 247 lightcurves...
Lightcurve import took 0:00:01.874902
Processing 247 files...
247/247 completed. Time for chunk: 0:00:14.005037
Features have been calculated, total time to calculate features: 0:00:14.058610
Saving output to data/output/Example_output.p
Cleaning up...
Done.
###Markdown
Saving as a Cluster Outlier Object
###Code
import clusterOutliers as coo
"""
!!! DOES NOT WORK FOR THIS EXAMPLE DATA
!!! CLUSTER OUTLIER OBJECT IS DESIGNED TO WORK ON LARGE DATASETS, EXAMPLE IS TOO SMALL
"""
example_coo = coo.clusterOutliers(feats=feats,fitsDir=path_to_fits,output_file='example.coo')
###Output
_____no_output_____
###Markdown
Speed tests
###Code
from datetime import datetime
start = datetime.now()
lc_path = './data/lightcurves/kplr001026032-2011271113734_llc.fits'
lc = fc.import_lcs(lc_path)
t = lc[1]
nf = lc[2]
err = lc[3]
lc_feats = fc.feats(t,nf,err)
print("Time to prime: {}".format(datetime.now()-start))
%%timeit
lc_path = './data/lightcurves/kplr001026032-2011271113734_llc.fits'
lc = fc.import_lcs(lc_path)
t = lc[1]
nf = lc[2]
err = lc[3]
lc_feats = fc.feats(t,nf,err)
%%timeit
lc_feats = fc.feats(t,nf,err)
%%timeit
lc = fc.import_lcs(lc_path)
###Output
19.1 ms ± 1.24 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
|
.ipynb_checkpoints/LDA topic modeling-checkpoint.ipynb
|
###Markdown
Now we are doing all of these steps for the whole available text (data_list2 here):
###Code
texts = []
# loop through document list
for i in data_list2:
# clean and tokenize document string
raw = i.lower()
tokens = tokenizer.tokenize(raw)
# remove stop words from tokens
stopped_tokens = [i for i in tokens if not i in en_stop]
# stem tokens
stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
# add tokens to list
texts.append(stemmed_tokens)
# turn our tokenized documents into a id <-> term dictionary
dictionary = corpora.Dictionary(texts)
# convert tokenized documents into a document-term matrix
corpus = [dictionary.doc2bow(text) for text in texts]
# generate LDA model
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=5, id2word = dictionary, passes=20)
print(ldamodel.print_topics(num_topics=5, num_words=10))
print(ldamodel)
type(ldamodel)
print(dictionary)
type(dictionary)
type(corpus)
len (corpus)
print(corpus)
###Output
[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 3), (5, 1), (6, 1), (7, 2), (8, 1), (9, 1), (10, 1), (11, 1), (12, 1), (13, 1), (14, 1), (15, 1), (16, 1), (17, 1), (18, 1), (19, 1), (20, 1), (21, 1), (22, 1), (23, 1), (24, 1), (25, 1), (26, 1), (27, 1), (28, 1), (29, 1)], [(1, 1), (4, 7), (5, 2), (7, 5), (12, 1), (17, 2), (30, 3), (31, 1), (32, 2), (33, 1), (34, 2), (35, 1), (36, 3), (37, 1), (38, 1), (39, 1), (40, 1), (41, 1), (42, 1), (43, 1), (44, 1), (45, 1), (46, 1), (47, 1), (48, 1), (49, 1), (50, 1), (51, 1), (52, 1), (53, 1), (54, 1), (55, 1), (56, 2), (57, 1), (58, 1), (59, 1), (60, 1), (61, 1), (62, 1), (63, 2), (64, 1), (65, 1), (66, 1), (67, 1), (68, 1), (69, 1), (70, 1), (71, 2), (72, 1), (73, 1), (74, 1), (75, 1), (76, 1), (77, 1), (78, 1), (79, 1), (80, 1), (81, 1), (82, 1), (83, 1), (84, 1), (85, 1), (86, 1), (87, 1), (88, 1), (89, 1)], [(4, 1), (5, 1), (6, 2), (12, 1), (18, 1), (56, 1), (67, 1), (69, 1), (83, 1), (90, 2), (91, 2), (92, 1), (93, 1), (94, 1), (95, 1), (96, 1), (97, 1), (98, 1), (99, 1), (100, 1), (101, 1), (102, 1), (103, 1), (104, 2), (105, 1), (106, 2), (107, 1), (108, 1), (109, 1), (110, 1), (111, 1), (112, 1), (113, 1), (114, 1), (115, 1), (116, 1), (117, 1), (118, 1), (119, 1), (120, 1), (121, 1), (122, 1), (123, 1), (124, 1), (125, 1), (126, 1), (127, 1), (128, 1), (129, 1), (130, 1), (131, 1)], [(71, 1), (91, 1), (92, 1), (124, 1), (132, 1), (133, 1), (134, 1), (135, 1), (136, 1), (137, 1)], [(1, 1), (3, 1), (4, 1), (13, 1), (57, 1), (91, 2), (92, 1), (108, 1), (138, 1), (139, 1), (140, 1), (141, 1), (142, 1), (143, 1), (144, 1), (145, 1)], [(4, 2), (5, 1), (17, 2), (45, 1), (62, 1), (66, 1), (100, 1), (110, 1), (129, 1), (146, 1), (147, 1), (148, 1), (149, 1), (150, 1), (151, 1), (152, 1), (153, 1), (154, 1), (155, 1), (156, 1)], [(38, 1), (91, 2), (92, 2), (104, 1), (110, 2), (131, 1), (138, 1), (157, 1), (158, 1), (159, 1), (160, 1), (161, 1), (162, 1), (163, 1), (164, 1), (165, 1)], [(4, 4), (5, 7), (7, 3), (9, 2), (17, 1), (34, 1), (35, 1), (43, 1), (50, 1), (56, 1), (63, 1), (71, 1), (72, 1), (82, 1), (91, 2), (92, 1), (97, 1), (100, 3), (104, 2), (108, 1), (110, 1), (112, 1), (114, 1), (130, 1), (131, 1), (147, 1), (148, 1), (153, 1), (154, 1), (158, 1), (159, 1), (166, 1), (167, 2), (168, 2), (169, 1), (170, 1), (171, 1), (172, 1), (173, 1), (174, 1), (175, 1), (176, 1), (177, 1), (178, 2), (179, 1), (180, 1), (181, 1), (182, 2), (183, 1), (184, 1), (185, 1), (186, 1), (187, 2), (188, 1), (189, 1), (190, 1), (191, 1), (192, 1), (193, 1), (194, 1), (195, 1), (196, 1), (197, 1), (198, 1), (199, 1), (200, 1)], [(4, 1), (19, 1), (201, 1), (202, 1), (203, 1), (204, 1), (205, 1), (206, 1), (207, 1), (208, 1), (209, 1)], [(1, 1), (4, 1), (5, 2), (66, 1), (90, 1), (91, 3), (92, 2), (100, 3), (104, 2), (108, 1), (112, 1), (138, 1), (141, 1), (170, 1), (208, 1), (210, 1), (211, 1), (212, 1), (213, 1), (214, 1), (215, 1), (216, 1), (217, 1), (218, 1)], [(4, 1), (5, 1), (80, 1), (91, 2), (100, 1), (108, 2), (121, 1), (145, 1), (169, 1), (219, 1), (220, 1), (221, 1), (222, 1), (223, 1), (224, 1), (225, 1), (226, 1), (227, 1), (228, 1), (229, 1), (230, 1)], [(1, 1), (16, 1), (130, 1), (141, 1), (169, 1), (197, 1), (214, 1), (231, 1), (232, 1), (233, 1), (234, 1), (235, 1), (236, 1), (237, 1)], [(4, 1), (63, 1), (73, 1), (106, 1), (108, 1), (238, 1), (239, 1), (240, 1), (241, 1)], [(1, 1), (4, 2), (7, 2), (17, 1), (30, 1), (32, 1), (35, 1), (40, 1), (47, 1), (50, 1), (51, 1), (90, 1), (91, 1), (92, 1), (100, 2), (104, 1), (110, 1), (123, 2), (129, 1), (141, 2), (154, 
1), (159, 1), (177, 1), (200, 1), (201, 1), (212, 1), (214, 1), (231, 4), (232, 2), (233, 1), (242, 1), (243, 2), (244, 1), (245, 1), (246, 1), (247, 1), (248, 1), (249, 1), (250, 1), (251, 1), (252, 2), (253, 1), (254, 4), (255, 1), (256, 1), (257, 1), (258, 1), (259, 2), (260, 1), (261, 2), (262, 1), (263, 1), (264, 1), (265, 1), (266, 1), (267, 1), (268, 1), (269, 1), (270, 1), (271, 2), (272, 2), (273, 1), (274, 1), (275, 2), (276, 1), (277, 1), (278, 1), (279, 1), (280, 1), (281, 1), (282, 1), (283, 1), (284, 1), (285, 1), (286, 1), (287, 1), (288, 1), (289, 1), (290, 1), (291, 1), (292, 1), (293, 1), (294, 1), (295, 1), (296, 1), (297, 1), (298, 1), (299, 1), (300, 1), (301, 1), (302, 1), (303, 1), (304, 1), (305, 1), (306, 1), (307, 1), (308, 1), (309, 1)], [(2, 1), (5, 1), (43, 1), (52, 1), (102, 1), (124, 1), (149, 1), (157, 1), (159, 1), (167, 1), (200, 2), (225, 1), (277, 1), (310, 1), (311, 1), (312, 1), (313, 1), (314, 1), (315, 1), (316, 1), (317, 1), (318, 1), (319, 1), (320, 1)], [(90, 1), (131, 1), (155, 1), (158, 1), (167, 1), (182, 1), (224, 1), (261, 1), (321, 1), (322, 1), (323, 1), (324, 1), (325, 1), (326, 1), (327, 1), (328, 1)], [(1, 1), (4, 1), (34, 1), (62, 1), (72, 1), (84, 1), (91, 1), (92, 1), (97, 1), (100, 2), (104, 1), (121, 1), (130, 1), (139, 1), (154, 1), (157, 1), (214, 1), (225, 1), (254, 1), (300, 1), (310, 1), (329, 1), (330, 1), (331, 1), (332, 1), (333, 1), (334, 1)], [(4, 1), (69, 1), (90, 2), (121, 1), (156, 1), (216, 1), (238, 2), (254, 1), (280, 1), (325, 1), (326, 1), (335, 1), (336, 1), (337, 1), (338, 1), (339, 1), (340, 1), (341, 1), (342, 1), (343, 2), (344, 1), (345, 1), (346, 1), (347, 1), (348, 1)], [(4, 1), (17, 2), (26, 1), (35, 1), (37, 1), (40, 1), (75, 1), (84, 2), (86, 1), (101, 1), (102, 2), (106, 1), (111, 1), (117, 1), (139, 1), (141, 1), (164, 1), (167, 1), (169, 1), (171, 2), (200, 1), (225, 1), (231, 1), (239, 1), (254, 1), (257, 1), (278, 1), (279, 2), (292, 1), (321, 1), (335, 1), (341, 1), (349, 1), (350, 1), (351, 1), (352, 1), (353, 1), (354, 1), (355, 1), (356, 2), (357, 1), (358, 2), (359, 1), (360, 1), (361, 1), (362, 1), (363, 1), (364, 1), (365, 1), (366, 1), (367, 1), (368, 1), (369, 1), (370, 1), (371, 1), (372, 1), (373, 1), (374, 1), (375, 1), (376, 1), (377, 1)], [(1, 4), (2, 2), (4, 6), (5, 2), (7, 2), (17, 4), (18, 1), (25, 1), (26, 1), (35, 3), (36, 1), (49, 1), (50, 2), (51, 1), (52, 1), (63, 1), (74, 1), (82, 1), (84, 1), (91, 4), (92, 2), (100, 1), (101, 2), (102, 2), (104, 2), (108, 1), (109, 1), (111, 1), (123, 1), (124, 1), (129, 1), (133, 1), (146, 2), (147, 3), (149, 1), (159, 2), (162, 1), (169, 1), (171, 1), (178, 1), (181, 1), (185, 1), (187, 1), (193, 1), (196, 1), (200, 1), (207, 1), (214, 2), (223, 1), (230, 1), (231, 1), (232, 1), (235, 1), (252, 2), (254, 3), (261, 1), (271, 1), (272, 1), (295, 1), (312, 1), (322, 1), (346, 3), (353, 1), (356, 1), (370, 1), (374, 2), (378, 2), (379, 1), (380, 1), (381, 1), (382, 1), (383, 1), (384, 5), (385, 3), (386, 1), (387, 1), (388, 1), (389, 1), (390, 2), (391, 1), (392, 1), (393, 2), (394, 1), (395, 1), (396, 1), (397, 2), (398, 1), (399, 1), (400, 1), (401, 1), (402, 1), (403, 2), (404, 1), (405, 2), (406, 1), (407, 1), (408, 1), (409, 1), (410, 2), (411, 1), (412, 1), (413, 1), (414, 1), (415, 1), (416, 2), (417, 1), (418, 1), (419, 1), (420, 1), (421, 1), (422, 1), (423, 1), (424, 1), (425, 1), (426, 1), (427, 1), (428, 1), (429, 1), (430, 1), (431, 1), (432, 1), (433, 1), (434, 1), (435, 1), (436, 2), (437, 1), (438, 1), (439, 1), (440, 2), (441, 
1), (442, 2), (443, 1), (444, 1), (445, 1), (446, 1), (447, 1), (448, 4), (449, 1), (450, 1), (451, 1), (452, 1), (453, 1), (454, 1), (455, 1), (456, 2), (457, 1), (458, 1), (459, 1), (460, 2), (461, 1), (462, 1), (463, 1), (464, 1), (465, 1), (466, 1)], [(1, 2), (2, 1), (3, 1), (4, 4), (5, 1), (35, 2), (53, 1), (83, 1), (91, 1), (104, 1), (108, 1), (130, 1), (142, 1), (147, 1), (153, 1), (211, 1), (214, 1), (216, 1), (226, 1), (235, 1), (243, 1), (272, 1), (325, 1), (387, 1), (453, 1), (467, 1), (468, 1), (469, 1), (470, 1), (471, 1), (472, 1), (473, 1), (474, 1), (475, 1), (476, 1), (477, 1)], [(2, 1), (3, 2), (4, 2), (17, 1), (57, 1), (159, 1), (225, 1), (235, 1), (386, 1), (478, 1), (479, 1), (480, 1)], [(2, 1), (4, 2), (16, 2), (17, 2), (52, 1), (57, 1), (72, 1), (91, 3), (92, 1), (100, 2), (146, 1), (149, 1), (164, 1), (185, 1), (206, 1), (213, 1), (235, 1), (262, 1), (264, 1), (273, 1), (334, 2), (387, 2), (409, 1), (416, 2), (434, 1), (469, 1), (481, 1), (482, 1), (483, 1), (484, 1), (485, 1), (486, 1), (487, 1), (488, 1), (489, 1), (490, 1), (491, 1), (492, 1)], [(4, 2), (5, 1), (12, 1), (16, 1), (35, 1), (49, 2), (50, 1), (51, 1), (63, 1), (65, 1), (66, 1), (81, 1), (90, 1), (104, 5), (106, 1), (111, 1), (130, 1), (131, 2), (139, 1), (169, 1), (170, 1), (182, 1), (185, 2), (217, 3), (219, 1), (254, 5), (270, 1), (312, 2), (321, 1), (349, 1), (356, 1), (434, 1), (489, 1), (493, 1), (494, 1), (495, 1), (496, 1), (497, 1), (498, 1), (499, 1), (500, 1), (501, 1), (502, 1), (503, 1), (504, 1), (505, 1), (506, 1), (507, 1), (508, 1), (509, 1), (510, 1), (511, 1), (512, 1), (513, 1), (514, 1), (515, 1)], [(5, 2), (9, 1), (14, 2), (17, 3), (19, 1), (34, 1), (51, 1), (57, 2), (59, 1), (69, 1), (77, 1), (90, 1), (91, 5), (92, 5), (104, 1), (110, 1), (111, 1), (121, 2), (139, 1), (146, 2), (169, 2), (170, 1), (178, 1), (213, 1), (217, 1), (220, 1), (231, 3), (238, 3), (254, 1), (269, 1), (273, 1), (275, 1), (283, 1), (334, 1), (335, 1), (356, 1), (388, 1), (405, 1), (474, 1), (498, 1), (502, 3), (503, 5), (516, 1), (517, 1), (518, 3), (519, 3), (520, 1), (521, 1), (522, 2), (523, 1), (524, 1), (525, 1), (526, 1), (527, 1), (528, 1), (529, 1), (530, 1), (531, 1), (532, 1), (533, 1), (534, 1), (535, 1), (536, 1), (537, 1), (538, 1), (539, 1), (540, 1), (541, 1), (542, 1), (543, 1), (544, 1), (545, 1), (546, 1), (547, 2), (548, 1), (549, 1), (550, 1), (551, 1), (552, 1), (553, 1), (554, 1), (555, 1), (556, 1), (557, 1), (558, 1)], [(2, 2), (5, 1), (16, 1), (90, 1), (91, 2), (100, 1), (107, 1), (108, 1), (139, 1), (147, 1), (158, 1), (171, 1), (182, 1), (213, 1), (300, 1), (335, 1), (356, 1), (386, 1), (559, 1), (560, 2), (561, 1), (562, 1), (563, 1), (564, 1), (565, 1), (566, 1), (567, 1), (568, 1), (569, 1), (570, 1), (571, 1), (572, 1), (573, 1), (574, 1), (575, 1), (576, 1)], [(4, 1), (14, 1), (35, 1), (36, 1), (75, 1), (90, 1), (100, 1), (170, 1), (210, 1), (282, 1), (356, 1), (548, 1), (577, 1), (578, 1), (579, 1), (580, 1), (581, 1), (582, 1)], [(4, 3), (92, 1), (104, 1), (108, 1), (117, 1), (217, 1), (225, 2), (238, 1), (285, 1), (556, 1), (583, 1), (584, 1), (585, 1), (586, 1)], [(1, 1), (2, 1), (4, 5), (5, 3), (7, 1), (17, 2), (31, 1), (35, 1), (50, 1), (52, 1), (56, 1), (74, 1), (91, 1), (92, 1), (110, 1), (124, 1), (139, 1), (141, 1), (154, 1), (157, 1), (168, 1), (196, 1), (229, 1), (272, 1), (279, 1), (300, 1), (338, 1), (416, 1), (472, 1), (473, 1), (587, 2), (588, 1), (589, 1), (590, 2), (591, 1), (592, 1), (593, 1), (594, 1), (595, 1), (596, 1), (597, 1), (598, 1), (599, 1), 
(600, 1), (601, 1), (602, 1), (603, 1), (604, 1), (605, 1), (606, 1), (607, 1)], [(4, 2), (7, 2), (11, 1), (25, 1), (26, 1), (34, 1), (35, 1), (36, 1), (52, 2), (91, 1), (104, 1), (108, 1), (112, 1), (147, 2), (154, 1), (187, 1), (199, 1), (228, 1), (231, 1), (238, 1), (252, 1), (264, 1), (292, 1), (335, 1), (405, 1), (560, 1), (571, 2), (572, 2), (608, 1), (609, 1), (610, 1), (611, 1), (612, 1), (613, 1), (614, 1), (615, 1), (616, 1), (617, 1), (618, 1), (619, 1), (620, 1), (621, 1), (622, 1), (623, 1), (624, 1), (625, 1), (626, 1), (627, 1), (628, 1)]]
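###Markdown
Each document in the corpus above is a bag-of-words vector: a list of (token_id, count) pairs produced by doc2bow. The short sketch below (added for illustration) maps the ids of the first document back to their stemmed tokens and shows that document's topic mixture under the fitted model.
###Code
# Map token ids of the first document back to the corresponding tokens
print([(dictionary[token_id], count) for token_id, count in corpus[0]])
# Topic distribution of the first document under the trained LDA model
print(ldamodel.get_document_topics(corpus[0]))
###Output
_____no_output_____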
|
qiskit/advanced/aer/7_matrix_product_state_method.ipynb
|
###Markdown
QasmSimulator: matrix product state simulation method
Simulation methods
The `QasmSimulator` has several simulation methods including `statevector`, `stabilizer`, `extended_stabilizer` and `matrix_product_state`. Each of these determines the internal representation of the quantum circuit and the algorithms used to process the quantum operations. They each have advantages and disadvantages, and choosing the best method is a matter of investigation. In this tutorial, we focus on the `matrix product state` simulation method.
Matrix product state simulation method
This simulation method is based on the concept of `matrix product states`. This structure was initially proposed in the paper *Efficient classical simulation of slightly entangled quantum computations* by Vidal in https://arxiv.org/abs/quant-ph/0301063. There are additional papers that describe the structure in more detail, for example *The density-matrix renormalization group in the age of matrix product states* by Schollwoeck https://arxiv.org/abs/1008.3477.
A pure quantum state is usually described as a state vector, by the expression $|\psi\rangle = \sum_{i_1=0}^1 {\ldots} \sum_{i_n=0}^1 c_{i_1 \ldots i_n} |i_1\rangle {\otimes} {\ldots} {\otimes} |i_n\rangle$. The state vector representation implies an exponential size representation, regardless of the actual circuit. Every quantum gate operating on this representation requires exponential time and memory.
The matrix product state (MPS) representation offers a local representation, in the form $\Gamma^{[1]} \lambda^{[1]} \Gamma^{[2]} \lambda^{[2]} \ldots \Gamma^{[n-1]} \lambda^{[n-1]} \Gamma^{[n]}$, such that all the information contained in the $c_{i_1 \ldots i_n}$ can be generated out of the MPS representation. Every $\Gamma^{[i]}$ is a tensor of complex numbers that represents qubit $i$. Every $\lambda^{[i]}$ is a matrix of real numbers that is used to normalize the amplitudes of qubits $i$ and $i+1$. Single-qubit gates operate only on the relevant tensor. Two-qubit gates operate on consecutive qubits $i$ and $i+1$. This involves a tensor-contract operation over $\lambda^{[i-1]}$, $\Gamma^{[i]}$, $\lambda^{[i]}$, $\Gamma^{[i+1]}$ and $\lambda^{[i+1]}$, that creates a single tensor. We apply the gate to this tensor, and then decompose back to the original structure. This operation may increase the size of the respective tensors. Gates that involve two qubits that are not consecutive require a series of swap gates to bring the two qubits next to each other and then the reverse swaps. In the worst case, the tensors may grow exponentially. However, the size of the overall structure remains 'small' for circuits that do not have 'many' two-qubit gates. This allows much more efficient operations in circuits with relatively 'low' entanglement. Characterizing when to use this method over other methods is a subject of current research.
Using the matrix product state simulation method
The matrix product state simulation method is invoked in the `QasmSimulator` by setting the simulation method option (`"method": "matrix_product_state"` in the backend options). Other than that, all operations are controlled by the `QasmSimulator` itself, as in the following example:
###Code
import numpy as np
# Import Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import Aer, execute
from qiskit.providers.aer import QasmSimulator
# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Define the simulation method
backend_opts_mps = {"method":"matrix_product_state"}
# Execute and get counts, using the matrix_product_state method
result = execute(circ, simulator, backend_options=backend_opts_mps).result()
counts = result.get_counts(circ)
counts
###Output
_____no_output_____
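###Markdown
As a concrete worked illustration of the MPS form described above (added for clarity, not part of the original tutorial): the circuit above prepares the two-qubit Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$, whose Schmidt decomposition gives $\lambda^{[1]} = (1/\sqrt{2},\, 1/\sqrt{2})$, $\Gamma^{[1]\,i_1}_{\alpha} = \delta_{i_1 \alpha}$ and $\Gamma^{[2]\,i_2}_{\alpha} = \delta_{\alpha i_2}$, so that $c_{i_1 i_2} = \sum_{\alpha} \Gamma^{[1]\,i_1}_{\alpha}\, \lambda^{[1]}_{\alpha}\, \Gamma^{[2]\,i_2}_{\alpha} = \tfrac{1}{\sqrt{2}}\, \delta_{i_1 i_2}$. Only two Schmidt coefficients are non-zero, so the internal bond dimension is 2; the long EPR/GHZ-type circuit later in this notebook has the same property, which is why the matrix product state method handles it cheaply.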
###Markdown
To see the internal state vector of the circuit, we can import the snapshot files:
###Code
from qiskit.extensions.simulator import Snapshot
from qiskit.extensions.simulator.snapshot import snapshot
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
# Define a snapshot that shows the current state vector
circ.snapshot('my_sv', snapshot_type='statevector')
circ.measure([0,1], [0,1])
# Execute
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
res = result.results
#print the state vector
statevector = res[0].data.snapshots.statevector
statevector['my_sv']
result.get_counts()
###Output
_____no_output_____
###Markdown
Running circuits using the matrix product state simulation method can be fast, relative to other methods. However, if we generate the state vector during the execution, then the conversion to state vector is, of course, exponential in memory and time, and therefore we don't benefit from using this method. We can benefit if we only do operations that don't require printing the full state vector. For example, if we run a circuit and then take measurement. The circuit below has 200 qubits. We create an `EPR state` involving all these qubits. Although this state is highly entangled, it is handled well by the matrix product state method, because there are effectively only two states. We can handle more qubits than this, but execution may take a few minutes. Try running a similar circuit with 500 qubits! Or maybe even 1000 (you can get a cup of coffee while waiting).
###Code
num_qubits = 200
qr = QuantumRegister(num_qubits)
cr = ClassicalRegister(num_qubits)
circ = QuantumCircuit(qr, cr)
# Create EPR state
circ.h(qr[0])
for i in range (0,num_qubits-1):
circ.cx(qr[i], qr[i+1])
# Measure
circ.measure(qr, cr)
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
print("Time taken: {} sec".format(result.time_taken))
result.get_counts()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
QasmSimulator: matrix product state simulation method
Simulation methods
The `QasmSimulator` has several simulation methods including `statevector`, `stabilizer`, `extended_stabilizer` and `matrix_product_state`. Each of these determines the internal representation of the quantum circuit and the algorithms used to process the quantum operations. They each have advantages and disadvantages, and choosing the best method is a matter of investigation. In this tutorial, we focus on the `matrix product state` simulation method.
Matrix product state simulation method
This simulation method is based on the concept of `matrix product states`. This structure was initially proposed in the paper *Efficient classical simulation of slightly entangled quantum computations* by Vidal in https://arxiv.org/abs/quant-ph/0301063. There are additional papers that describe the structure in more detail, for example *The density-matrix renormalization group in the age of matrix product states* by Schollwoeck https://arxiv.org/abs/1008.3477.
A pure quantum state is usually described as a state vector, by the expression $|\psi\rangle = \sum_{i_1=0}^1 {\ldots} \sum_{i_n=0}^1 c_{i_1 \ldots i_n} |i_1\rangle {\otimes} {\ldots} {\otimes} |i_n\rangle$. The state vector representation implies an exponential size representation, regardless of the actual circuit. Every quantum gate operating on this representation requires exponential time and memory.
The matrix product state (MPS) representation offers a local representation, in the form $\Gamma^{[1]} \lambda^{[1]} \Gamma^{[2]} \lambda^{[2]} \ldots \Gamma^{[n-1]} \lambda^{[n-1]} \Gamma^{[n]}$, such that all the information contained in the $c_{i_1 \ldots i_n}$ can be generated out of the MPS representation. Every $\Gamma^{[i]}$ is a tensor of complex numbers that represents qubit $i$. Every $\lambda^{[i]}$ is a matrix of real numbers that is used to normalize the amplitudes of qubits $i$ and $i+1$. Single-qubit gates operate only on the relevant tensor. Two-qubit gates operate on consecutive qubits $i$ and $i+1$. This involves a tensor-contract operation over $\lambda^{[i-1]}$, $\Gamma^{[i]}$, $\lambda^{[i]}$, $\Gamma^{[i+1]}$ and $\lambda^{[i+1]}$, that creates a single tensor. We apply the gate to this tensor, and then decompose back to the original structure. This operation may increase the size of the respective tensors. Gates that involve two qubits that are not consecutive require a series of swap gates to bring the two qubits next to each other and then the reverse swaps. In the worst case, the tensors may grow exponentially. However, the size of the overall structure remains 'small' for circuits that do not have 'many' two-qubit gates. This allows much more efficient operations in circuits with relatively 'low' entanglement. Characterizing when to use this method over other methods is a subject of current research.
Using the matrix product state simulation method
The matrix product state simulation method is invoked in the `QasmSimulator` by setting the simulation method option (`"method": "matrix_product_state"` in the backend options). Other than that, all operations are controlled by the `QasmSimulator` itself, as in the following example:
###Code
import numpy as np
# Import Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import Aer, execute
from qiskit.providers.aer import QasmSimulator
# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Define the simulation method
backend_opts_mps = {"method":"matrix_product_state"}
# Execute and get counts, using the matrix_product_state method
result = execute(circ, simulator, backend_options=backend_opts_mps).result()
counts = result.get_counts(circ)
counts
###Output
_____no_output_____
###Markdown
To see the internal state vector of the circuit, we can import the snapshot files:
###Code
from qiskit.extensions.simulator import Snapshot
from qiskit.extensions.simulator.snapshot import snapshot
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
# Define a snapshot that shows the current state vector
circ.snapshot('my_sv', snapshot_type='statevector')
circ.measure([0,1], [0,1])
# Execute
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
res = result.results
#print the state vector
statevector = res[0].data.snapshots.statevector
statevector['my_sv']
result.get_counts()
###Output
_____no_output_____
###Markdown
Running circuits using the matrix product state simulation method can be fast, relative to other methods. However, if we generate the state vector during the execution, then the conversion to state vector is, of course, exponential in memory and time, and therefore we don't benefit from using this method. We can benefit if we only do operations that don't require printing the full state vector. For example, if we run a circuit and then take measurement. The circuit below has 200 qubits. We create an `EPR state` involving all these qubits. Although this state is highly entangled, it is handled well by the matrix product state method, because there are effectively only two states. We can handle more qubits than this, but execution may take a few minutes. Try running a similar circuit with 500 qubits! Or maybe even 1000 (you can get a cup of coffee while waiting).
###Code
num_qubits = 50
qr = QuantumRegister(num_qubits)
cr = ClassicalRegister(num_qubits)
circ = QuantumCircuit(qr, cr)
# Create EPR state
circ.h(qr[0])
for i in range (0,num_qubits-1):
circ.cx(qr[i], qr[i+1])
# Measure
circ.measure(qr, cr)
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
print("Time taken: {} sec".format(result.time_taken))
result.get_counts()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Trusted Notebook" align="middle"> QasmSimulator: matrix product state simulation method Simulation methodsThe QasmSimulator has several simulation methods including `statevector`, `stabilizer`, `extended_stabilizer` and `matrix_product_state`. Each of these determines the internal representation of the quantum circuit and the algorithms used to process the quantum operations. They each have advantages and disadvantages, and choosing the best method is a matter of investigation.In this tutorial, we focus on the `matrix product state simulation method`. Matrix product state simulation methodThis simulation method is based on the concept of `matrix product states`. This structure was initially proposed in the paper *Efficient classical simulation of slightly entangled quantum computations* by Vidal in https://arxiv.org/abs/quant-ph/0301063. There are additional papers that describe the structure in more detail, for example *The density-matrix renormalization group in the age of matrix product states* by Schollwoeck https://arxiv.org/abs/1008.3477. A pure quantum state is usually described as a state vector, by the expression $|\psi\rangle = \sum_{i_1=0}^1 {\ldots} \sum_{i_n=0}^1 c_{i_1 \ldots i_n} |i_i\rangle {\otimes} {\ldots} {\otimes} |i_n\rangle$.The state vector representation implies an exponential size representation, regardless of the actual circuit. Every quantum gate operating on this representation requires exponential time and memory.The matrix product state (MPS) representation offers a local representation, in the form:$\Gamma^{[1]} \lambda^{[1]} \Gamma^{[2]} \lambda^{[2]}\ldots \Gamma^{[1]} \lambda^{[n-1]} \Gamma^{[n]}$, such that all the information contained in the $c_{i_1 \ldots i_n}$, can be generated out of the MPS representation.. Every $\Gamma^{[i]}$ is a tensor of complex numbers that represents qubit $i$. Every $\lambda^{[i]}$ is a matrix of real numbers that is used to normalize the amplitudes of qubits $i$ and $i+1$. Single-qubit gates operate only on the relevant tensor. Two-qubit gates operate on consecutive qubits $i$ and $i+1$. This involves a tensor-contract operation over $\lambda^{[i-1]}$, $\Gamma^{[i-1]}$, $\lambda^{[i]}$, $\Gamma^{[i+1]}$ and $\lambda^{[i+1]}$, that creates a single tensor. We apply the gate to this tensor, and then decompose back to the original structure. This operation may increase the size of the respective tensors. Gates that involve two qubits that are not consecutive, require a series of swap gates to bring the two qubits next to each other and then the reverse swaps. In the worst case, the tensors may grow exponentially. However, the size of the overall structure remains 'small' for circuits that do not have 'many' two-qubit gates. This allows much more efficient operations in circuits with relatively 'low' entanglement. Characterizing when to use this method over other methods is a subject of current research. Using the matrix product state simulation methodThe matrix product state simulation method is invoked in the qasm simulator by setting the `simulation_method`. Other than that, all operations are controlled by the qasm simulator itself, as in the following example:
###Code
import numpy as np
# Import Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import Aer, execute
from qiskit.providers.aer import QasmSimulator
# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Define the simulation method
backend_opts_mps = {"method":"matrix_product_state"}
# Execute and get counts, using the matrix_product_state method
result = execute(circ, simulator, backend_options=backend_opts_mps).result()
counts = result.get_counts(circ)
counts
###Output
_____no_output_____
###Markdown
To see the internal state vector of the circuit, we can import the snapshot files:
###Code
from qiskit.extensions.simulator import Snapshot
from qiskit.extensions.simulator.snapshot import snapshot
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
# Define a snapshot that shows the current state vector
circ.snapshot('my_sv', snapshot_type='statevector')
circ.measure([0,1], [0,1])
# Execute
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
res = result.results
#print the state vector
statevector = res[0].data.snapshots.statevector
statevector['my_sv']
result.get_counts()
###Output
_____no_output_____
###Markdown
Running circuits using the matrix product state simulation method can be fast, relative to other methods. However, if we generate the state vector during the execution, then the conversion to state vector is, of course, exponential in memory and time, and therefore we don't benefit from using this method. We can benefit if we only do operations that don't require printing the full statevector. For example, if we run a circuit and then take measurement. The circuit below has 200 qubits. We create an `EPR state` involving all these qubits. Although this state is highly entangled, it is handled well by the matrix product state method, because there are effectively only two states. We can handle more qubits than this, but execution may take a few minutes. Try running a similar circuit with 500 qubits! or maybe even 1000 (you can get a cup of coffee while waiting).
###Code
num_qubits = 200
qr = QuantumRegister(num_qubits)
cr = ClassicalRegister(num_qubits)
circ = QuantumCircuit(qr, cr)
# Create EPR state
circ.h(qr[0])
for i in range (0,num_qubits-1):
circ.cx(qr[i], qr[i+1])
# Measure
circ.measure(qr, cr)
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
print("Time taken: {} sec".format(result.time_taken))
result.get_counts()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
|
site/ko/tutorials/load_data/images.ipynb
|
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
이미지 로드 TensorFlow.org 에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub)에서 소스보기 노트북에 다운로드 이 튜토리얼은 `tf.data`를 사용하는 이미지 데이터세트를 불러오는 방법의 간단한 예시를 제공합니다.이러한 예시에서 사용된 데이터세트는 데이터세트마다 이미지의 하나의 클래스르 가지고 이미지의 디렉토리로 분배됩니다. 설치
###Code
import tensorflow as tf
AUTOTUNE = tf.data.experimental.AUTOTUNE
import IPython.display as display
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import os
tf.__version__
###Output
_____no_output_____
###Markdown
이미지 검색트레이닝을 시작하기 전에 네트워크에 인식하고 싶은 새로운 클래스에 대해 학습하기 위한 이미지세트가 필요합니다. 구글에서 꽃 이미지를 허가하는 크리에이티브 커먼즈에 대한 기록물을 사용할 수 있습니다.참고: 모든 이미지에는 CC-BY 라이센스가 부여되며, 생성자는`LICENSE.txt`파일에 나열됩니다.
###Code
import pathlib
data_dir = tf.keras.utils.get_file(origin='https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
fname='flower_photos', untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
다운로드(218MB)후, 꽃 이미지를 복사할 수 있습니다.디렉토리에는 클래스마다 5개으 서브 디렉토리가 있습니다.
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
image_count
CLASS_NAMES = np.array([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"])
CLASS_NAMES
###Output
_____no_output_____
###Markdown
각 디렉토리에는 그 타입의 꽃 이미지가 포함되어 있습니다. 이것은 장미입니다.
###Code
roses = list(data_dir.glob('roses/*'))
for image_path in roses[:3]:
display.display(Image.open(str(image_path)))
###Output
_____no_output_____
###Markdown
keras.preprocessing을 사용하여 로드 이미지를 로드하는 간단한 방법은 `tf.keras.preprocessing`를 사용하는 것입니다.
###Code
# 1./255는 [0,1] 범위에서 unit8에서 float32로 변환합니다.
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
###Output
_____no_output_____
###Markdown
로더의 매개변수를 정의합니다.
###Code
BATCH_SIZE = 32
IMG_HEIGHT = 224
IMG_WIDTH = 224
STEPS_PER_EPOCH = np.ceil(image_count/BATCH_SIZE)
train_data_gen = image_generator.flow_from_directory(directory=str(data_dir),
batch_size=BATCH_SIZE,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
classes = list(CLASS_NAMES))
###Output
_____no_output_____
###Markdown
배치를 점검합니다.
###Code
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10,10))
for n in range(25):
ax = plt.subplot(5,5,n+1)
plt.imshow(image_batch[n])
plt.title(CLASS_NAMES[label_batch[n]==1][0].title())
plt.axis('off')
image_batch, label_batch = next(train_data_gen)
show_batch(image_batch, label_batch)
###Output
_____no_output_____
###Markdown
tf.data를 사용하여 로드 위의 `keras.preprocessing`방법은 편리하지만 세가지 결점이 있습니다. 1. 늦습니다. 밑의 성과 부분을 참고합니다.2. 섬세한 컨트롤이 결여되어 있습니다.3. TensorFlow의 나머지 부분과 잘 통합되어 있지 않습니다. 파일을 `tf.data.dataset`로 로드하려면 데이터세트에 먼저 파일 경로의 데이터세트를 생성해야합니다.
###Code
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'))
for f in list_ds.take(5):
print(f.numpy())
###Output
_____no_output_____
###Markdown
파일 경로를 (`이미지, 레이블`) 쌍으로 변환하는 짧은 pure-tensorflow 함수를 사용합니다.
###Code
def get_label(file_path):
# 경로를 경로 구성요소 목록으로 변환합니다.
parts = tf.strings.split(file_path, os.path.sep)
# 마지막 두 번째 클래스 디렉터리입니다.
return parts[-2] == CLASS_NAMES
def decode_img(img):
# 압축된 문자열을 3D unit8 텐서로 변환합니다.
img = tf.image.decode_jpeg(img, channels=3)
# [0,1]범위의 float으로 변환하려면 convert_image_dtype을 사용합니다.
img = tf.image.convert_image_dtype(img, tf.float32)
# 이미지 크기를 원하는 크기로 조정합니다.
return tf.image.resize(img, [IMG_HEIGHT, IMG_WIDTH])
def process_path(file_path):
label = get_label(file_path)
# 파일에서 데이터를 문자열로 로드합니다.
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
###Output
_____no_output_____
###Markdown
Dataset.map을 사용하여 `이미지, 레이블` 쌍의 데이터세트를 만듭니다.
###Code
# 다중 영상이 병렬로 로드/처리되도록 num_parallel_calls로 설정합니다.
labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
for image, label in labeled_ds.take(1):
print("이미지 모양: ", image.numpy().shape)
print("레이블: ", label.numpy())
###Output
_____no_output_____
###Markdown
기본 훈련법 이 데이터세트에서 모델을 트레이닝하려면 다음이 요구됩니다.* 잘 섞습니다.* 일괄 처리합니다.* 최대한 빨리 이용할 수 있는 배치를 합니다.이 기능은 `tf.data` api를 사용하고 쉽게 추가할 수 있습니다.
###Code
def prepare_for_training(ds, cache=True, shuffle_buffer_size=1000):
# 이것은 작은 데이터세트로, 한 번만 로드하고 메모리에 보관합니다.
# 메모리에 맞지 않는 데이터세트의 사전 처리 작업을 캐싱하려면 .cache(filename)을
# 사용합니다.
if cache:
if isinstance(cache, str):
ds = ds.cache(cache)
else:
ds = ds.cache()
ds = ds.shuffle(buffer_size=shuffle_buffer_size)
# 계속 반복합니다.
ds = ds.repeat()
ds = ds.batch(BATCH_SIZE)
# `prefetch`는 모델이 훈련하느 동안 데이터세트가 백그라운드에 배치를 가져올 수
# 있도록 합니다.
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = prepare_for_training(labeled_ds)
image_batch, label_batch = next(iter(train_ds))
show_batch(image_batch.numpy(), label_batch.numpy())
###Output
_____no_output_____
###Markdown
성과주의: 이 부분에서는 성과에 도움이되는 간단한 기술을 몇가지 소개하겠습니다. 상세한 가이드에 대해서는 [Input Pipeline Performance](../../guide/performance/datasets) 를 참고합니다. 먼저 데이터세트의 성과를 체크하는 기능을 살펴보겠습니다.
###Code
import time
default_timeit_steps = 1000
def timeit(ds, steps=default_timeit_steps):
start = time.time()
it = iter(ds)
for i in range(steps):
batch = next(it)
if i%10 == 0:
print('.',end='')
print()
end = time.time()
duration = end-start
print("{} batches: {} s".format(steps, duration))
print("{:0.5f} Images/s".format(BATCH_SIZE*steps/duration))
###Output
_____no_output_____
###Markdown
두 데이터 생성기의 속도를 비교해 봅니다.
###Code
# `keras.preprocessing`을 이용했을 때
timeit(train_data_gen)
# `tf.data`을 이용했을 때
timeit(train_ds)
###Output
_____no_output_____
###Markdown
성과 향상의 대부분은 `.cache`의 사용에서 발생합니다.
###Code
uncached_ds = prepare_for_training(labeled_ds, cache=False)
timeit(uncached_ds)
###Output
_____no_output_____
###Markdown
데이터세트가 메모리에 맞지 않는 경우에는 캐시파일을 사용해 몇 가지 이점을 유지합니다.
###Code
filecache_ds = prepare_for_training(labeled_ds, cache="./flowers.tfcache")
timeit(filecache_ds)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
이미지 로드 TensorFlow.org에서 보기 Google Colab에서 실행 GitHub에서 소스 보기 노트북 다운로드 이 튜토리얼은 두 가지 방법으로 이미지 데이터세트를 로드하고 전처리하는 방법을 보여줍니다. 먼저, 고급 Keras 전처리 [유틸리티](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) 및 [레이어](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing)를 사용합니다. 다음으로 [tf.data](https://www.tensorflow.org/guide/data)를 사용하여 처음부터 자체 입력 파이프라인을 작성합니다. 설정
###Code
import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import tensorflow_datasets as tfds
print(tf.__version__)
###Output
_____no_output_____
###Markdown
꽃 데이터세트 다운로드하기이 튜토리얼에서는 수천 장의 꽃 사진 데이터세트를 사용합니다. 꽃 데이터세트에는 클래스당 하나씩 5개의 하위 디렉토리가 있습니다.```flowers_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/``` 참고: 모든 이미지에는 CC-BY 라이선스가 있으며 크리에이터는 LICENSE.txt 파일에 나열됩니다.
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url,
fname='flower_photos',
untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
다운로드한 후 (218MB), 이제 꽃 사진의 사본을 사용할 수 있습니다. 총 3670개의 이미지가 있습니다.
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
각 디렉토리에는 해당 유형의 꽃 이미지가 포함되어 있습니다. 다음은 장미입니다.
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
###Output
_____no_output_____
###Markdown
keras.preprocessing을 사용하여 로드하기[image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory)를 사용하여 이들 이미지를 디스크에 로드해 보겠습니다. 참고: 이 섹션에 소개된 Keras Preprocesing 유틸리티 및 레이어는 현재 실험 중이며 변경될 수 있습니다. 데이터세트 만들기 로더를 위해 일부 매개변수를 정의합니다.
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
모델을 개발할 때 검증 분할을 사용하는 것이 좋습니다. 훈련에 이미지의 80%를 사용하고 검증에 20%를 사용합니다.
###Code
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
이러한 데이터세트의 `class_names` 속성에서 클래스 이름을 찾을 수 있습니다.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
데이터 시각화하기훈련 데이터세트의 처음 9개 이미지는 다음과 같습니다.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
이러한 데이터세트를 사용하는 모델을 `model.fit`(이 튜토리얼의 뒷부분에 표시)에 전달하여 모델을 훈련할 수 있습니다. 원하는 경우, 데이터세트를 수동으로 반복하고 이미지 배치를 검색할 수도 있습니다.
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
`image_batch`는 형상 `(32, 180, 180, 3)`의 텐서입니다. 이것은 형상 `180x180x3`의 32개 이미지 배치입니다(마지막 치수는 색상 채널 RGB를 나타냄). `label_batch`는 형상 `(32,)`의 텐서이며 32개 이미지에 해당하는 레이블입니다. 참고: 이들 텐서 중 하나에서 `.numpy()`를 호출하여 `numpy.ndarray`로 변환할 수 있습니다. 데이터 표준화하기 RGB 채널 값은 `[0, 255]` 범위에 있습니다. 신경망에는 이상적이지 않습니다. 일반적으로 입력 값을 작게 만들어야 합니다. 여기서는 Rescaling 레이어를 사용하여 값이 `[0, 1]`에 있도록 표준화합니다.
###Code
from tensorflow.keras import layers
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
이 레이어를 사용하는 방법에는 두 가지가 있습니다. map을 호출하여 데이터세트에 레이어를 적용할 수 있습니다.
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
또는 모델 정의 내에 레이어를 포함하여 배포를 단순화할 수 있습니다. 여기서는 두 번째 접근 방식을 사용할 것입니다. 참고: 픽셀 값을 `[-1,1]`으로 조정하려면 대신 `Rescaling(1./127.5, offset=-1)`를 작성할 수 있습니다. 참고: 이전에 `image_dataset_from_directory`의 `image_size` 인수를 사용하여 이미지 크기를 조정했습니다. 모델에 크기 조정 논리를 포함하려면 [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) 레이어를 대신 사용할 수 있습니다. 성능을 위한 데이터세트 구성하기버퍼링된 프리페치를 사용하여 I/O가 차단되지 않고 디스크에서 데이터를 생성할 수 있도록 합니다. 데이터를 로드할 때 사용해야 하는 두 가지 중요한 메서드입니다.`.cache()`는 첫 번째 epoch 동안 디스크에서 이미지를 로드한 후 이미지를 메모리에 유지합니다. 이렇게 하면 모델을 훈련하는 동안 데이터세트가 병목 상태가 되지 않습니다. 데이터세트가 너무 커서 메모리에 맞지 않는 경우, 이 메서드를 사용하여 성능이 높은 온디스크 캐시를 생성할 수도 있습니다.`.prefetch()`는 훈련 중에 데이터 전처리 및 모델 실행과 겹칩니다.관심 있는 독자는 [데이터 성능 가이드](https://www.tensorflow.org/guide/data_performanceprefetching)에서 두 가지 메서드와 디스크에 데이터를 캐시하는 방법에 대해 자세히 알아볼 수 있습니다.
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
모델 훈련하기완전성을 위해 준비한 데이터세트를 사용하여 간단한 모델을 훈련하는 방법을 보여줍니다. 이 모델은 어떤 식으로든 조정되지 않았습니다. 목표는 방금 만든 데이터세트를 사용하여 역학을 보여주는 것입니다. 이미지 분류에 대한 자세한 내용은 이 [튜토리얼](https://www.tensorflow.org/tutorials/images/classification)을 참조하세요.
###Code
num_classes = 5
model = tf.keras.Sequential([
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
참고: 몇 가지 epoch에 대해서만 훈련하므로 이 튜토리얼은 빠르게 진행됩니다.
###Code
model.fit(
train_ds,
batch_size=batch_size,
validation_data=val_ds,
epochs=3
)
###Output
_____no_output_____
###Markdown
참고: `model.fit`을 사용하는 대신 사용자 정의 훈련 루프를 작성할 수도 있습니다. 자세한 내용은 이 [튜토리얼](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch)을 참조하세요. 검증 정확성이 훈련 정확성에 비해 낮으므로 모델이 과대적합되었음을 알 수 있습니다. 이 [튜토리얼](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit)에서 과대적합 및 축소 방법에 대해 자세히 알아볼 수 있습니다. 미세 제어를 위해 tf.data 사용하기 위의 keras.preprocessing 유틸리티는 이미지의 디렉토리에서 `tf.data.Dataset`을 작성하는 편리한 방법입니다. 보다 세밀한 제어를 위해 `tf.data`을 사용하여 자체 입력 파이프라인을 작성할수 있습니다. 이 섹션에서는 이전에 다운로드한 zip 파일 경로부터 시작하여 이를 수행하는 방법을 보여줍니다.
###Code
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'), shuffle=False)
list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)
for f in list_ds.take(5):
print(f.numpy())
###Output
_____no_output_____
###Markdown
파일의 트리 구조를 사용하여 `class_names` 목록을 컴파일할 수 있습니다.
###Code
class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
###Output
_____no_output_____
###Markdown
데이터세트를 훈련 및 검증으로 분할합니다.
###Code
val_size = int(image_count * 0.2)
train_ds = list_ds.skip(val_size)
val_ds = list_ds.take(val_size)
###Output
_____no_output_____
###Markdown
다음과 같이 각 데이터세트의 길이를 볼 수 있습니다.
###Code
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
###Output
_____no_output_____
###Markdown
파일 경로를 `(img, label)` 쌍으로 변환하는 간단한 함수를 작성합니다.
###Code
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
one_hot = parts[-2] == class_names
# Integer encode the label
return tf.argmax(one_hot)
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.image.decode_jpeg(img, channels=3)
# resize the image to the desired size
return tf.image.resize(img, [img_height, img_width])
def process_path(file_path):
label = get_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
###Output
_____no_output_____
###Markdown
`Dataset.map`을 사용하여 `image, label` 쌍의 데이터세트를 작성합니다.
###Code
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)
for image, label in train_ds.take(1):
print("Image shape: ", image.numpy().shape)
print("Label: ", label.numpy())
###Output
_____no_output_____
###Markdown
성능을 위한 데이터세트 구성하기 이 데이터세트로 모델을 훈련하려면 데이터에 대해 다음이 필요합니다.- 잘 섞는다.- 배치 처리한다.- 가능한 빨리 배치를 사용할 수 있어야 한다.이러한 기능은 `tf.data` API를 사용하여 추가할 수 있습니다. 자세한 내용은 [입력 파이프라인 성능](../../guide/performance/datasets) 가이드를 참조하세요.
###Code
def configure_for_performance(ds):
ds = ds.cache()
ds = ds.shuffle(buffer_size=1000)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
###Output
_____no_output_____
###Markdown
데이터 시각화하기이 데이터세트를 이전에 작성한 데이터세트와 유사하게 시각화할 수 있습니다.
###Code
image_batch, label_batch = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].numpy().astype("uint8"))
label = label_batch[i]
plt.title(class_names[label])
plt.axis("off")
###Output
_____no_output_____
###Markdown
모델 계속 훈련하기위의 `keras.preprocessing`에 의해 작성된 것과 유사한 `tf.data.Dataset`를 수동으로 빌드했습니다. 모델 훈련을 계속할 수 있습니다. 이전과 마찬가지로 실행 시간을 짧게 유지하기 위해 몇 가지 epoch 동안 훈련합니다.
###Code
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
###Output
_____no_output_____
###Markdown
TensorFlow 데이터세트 사용하기이 튜토리얼에서는 지금까지 디스크에서 데이터를 로드하는 데 중점을 두었습니다. [TensorFlow 데이터세트](https://www.tensorflow.org/datasets)에서 다운로드하기 쉬운 대규모 데이터세트 [카탈로그](https://www.tensorflow.org/datasets)를 탐색하여 사용할 데이터세트를 찾을 수도 있습니다. 이전에 Flowers 데이터세트를 디스크에서 로드했으므로 TensorFlow 데이터세트로 가져오는 방법을 살펴보겠습니다. TensorFlow 데이터세트를 사용하여 꽃 [데이터세트](https://www.tensorflow.org/datasets/catalog/tf_flowers)를 다운로드합니다.
###Code
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
꽃 데이터세트에는 5개의 클래스가 있습니다.
###Code
num_classes = metadata.features['label'].num_classes
print(num_classes)
###Output
_____no_output_____
###Markdown
데이터세트에서 이미지를 검색합니다.
###Code
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
###Output
_____no_output_____
###Markdown
이전과 마찬가지로, 성능을 위해 각 데이터세트를 일괄 처리, 셔플 및 구성해야 합니다.
###Code
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
test_ds = configure_for_performance(test_ds)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
이미지 로드 TensorFlow.org에서 보기 Google Colab에서 실행 GitHub에서 소스 보기 노트북 다운로드 이 튜토리얼은 두 가지 방법으로 이미지 데이터세트를 로드하고 전처리하는 방법을 보여줍니다. 먼저, 고급 Keras 전처리 [유틸리티](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) 및 [레이어](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing)를 사용합니다. 다음으로 [tf.data](https://www.tensorflow.org/guide/data)를 사용하여 처음부터 자체 입력 파이프라인을 작성합니다. 설정
###Code
import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import tensorflow_datasets as tfds
print(tf.__version__)
###Output
_____no_output_____
###Markdown
꽃 데이터세트 다운로드하기이 튜토리얼에서는 수천 장의 꽃 사진 데이터세트를 사용합니다. 꽃 데이터세트에는 클래스당 하나씩 5개의 하위 디렉토리가 있습니다.```flowers_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/``` 참고: 모든 이미지에는 CC-BY 라이선스가 있으며 크리에이터는 LICENSE.txt 파일에 나열됩니다.
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url,
fname='flower_photos',
untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
다운로드한 후 (218MB), 이제 꽃 사진의 사본을 사용할 수 있습니다. 총 3670개의 이미지가 있습니다.
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
각 디렉토리에는 해당 유형의 꽃 이미지가 포함되어 있습니다. 다음은 장미입니다.
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
###Output
_____no_output_____
###Markdown
keras.preprocessing을 사용하여 로드하기[image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory)를 사용하여 이들 이미지를 디스크에 로드해 보겠습니다. 참고: 이 섹션에 소개된 Keras Preprocesing 유틸리티 및 레이어는 현재 실험 중이며 변경될 수 있습니다. 데이터세트 만들기 로더를 위해 일부 매개변수를 정의합니다.
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
모델을 개발할 때 검증 분할을 사용하는 것이 좋습니다. 훈련에 이미지의 80%를 사용하고 검증에 20%를 사용합니다.
###Code
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
이러한 데이터세트의 `class_names` 속성에서 클래스 이름을 찾을 수 있습니다.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
데이터 시각화하기훈련 데이터세트의 처음 9개 이미지는 다음과 같습니다.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
이러한 데이터세트를 사용하는 모델을 `model.fit`(이 튜토리얼의 뒷부분에 표시)에 전달하여 모델을 훈련할 수 있습니다. 원하는 경우, 데이터세트를 수동으로 반복하고 이미지 배치를 검색할 수도 있습니다.
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
`image_batch`는 형상 `(32, 180, 180, 3)`의 텐서입니다. 이것은 형상 `180x180x3`의 32개 이미지 배치입니다(마지막 치수는 색상 채널 RGB를 나타냄). `label_batch`는 형상 `(32,)`의 텐서이며 32개 이미지에 해당하는 레이블입니다. 참고: 이들 텐서 중 하나에서 `.numpy()`를 호출하여 `numpy.ndarray`로 변환할 수 있습니다. 데이터 표준화하기 RGB 채널 값은 `[0, 255]` 범위에 있습니다. 신경망에는 이상적이지 않습니다. 일반적으로 입력 값을 작게 만들어야 합니다. 여기서는 Rescaling 레이어를 사용하여 값이 `[0, 1]`에 있도록 표준화합니다.
###Code
from tensorflow.keras import layers
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
이 레이어를 사용하는 방법에는 두 가지가 있습니다. map을 호출하여 데이터세트에 레이어를 적용할 수 있습니다.
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
또는 모델 정의 내에 레이어를 포함하여 배포를 단순화할 수 있습니다. 여기서는 두 번째 접근 방식을 사용할 것입니다. 참고: 픽셀 값을 `[-1,1]`으로 조정하려면 대신 `Rescaling(1./127.5, offset=-1)`를 작성할 수 있습니다. 참고: 이전에 `image_dataset_from_directory`의 `image_size` 인수를 사용하여 이미지 크기를 조정했습니다. 모델에 크기 조정 논리를 포함하려면 [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) 레이어를 대신 사용할 수 있습니다. 성능을 위한 데이터세트 구성하기버퍼링된 프리페치를 사용하여 I/O가 차단되지 않고 디스크에서 데이터를 생성할 수 있도록 합니다. 데이터를 로드할 때 사용해야 하는 두 가지 중요한 메서드입니다.`.cache()`는 첫 번째 epoch 동안 디스크에서 이미지를 로드한 후 이미지를 메모리에 유지합니다. 이렇게 하면 모델을 훈련하는 동안 데이터세트가 병목 상태가 되지 않습니다. 데이터세트가 너무 커서 메모리에 맞지 않는 경우, 이 메서드를 사용하여 성능이 높은 온디스크 캐시를 생성할 수도 있습니다.`.prefetch()`는 훈련 중에 데이터 전처리 및 모델 실행과 겹칩니다.관심 있는 독자는 [데이터 성능 가이드](https://www.tensorflow.org/guide/data_performanceprefetching)에서 두 가지 메서드와 디스크에 데이터를 캐시하는 방법에 대해 자세히 알아볼 수 있습니다.
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
모델 훈련하기완전성을 위해 준비한 데이터세트를 사용하여 간단한 모델을 훈련하는 방법을 보여줍니다. 이 모델은 어떤 식으로든 조정되지 않았습니다. 목표는 방금 만든 데이터세트를 사용하여 역학을 보여주는 것입니다. 이미지 분류에 대한 자세한 내용은 이 [튜토리얼](https://www.tensorflow.org/tutorials/images/classification)을 참조하세요.
###Code
num_classes = 5
model = tf.keras.Sequential([
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
참고: 몇 가지 epoch에 대해서만 훈련하므로 이 튜토리얼은 빠르게 진행됩니다.
###Code
model.fit(
train_ds,
batch_size=batch_size,
validation_data=val_ds,
epochs=3
)
###Output
_____no_output_____
###Markdown
참고: `model.fit`을 사용하는 대신 사용자 정의 훈련 루프를 작성할 수도 있습니다. 자세한 내용은 이 [튜토리얼](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch)을 참조하세요. 검증 정확성이 훈련 정확성에 비해 낮으므로 모델이 과대적합되었음을 알 수 있습니다. 이 [튜토리얼](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit)에서 과대적합 및 축소 방법에 대해 자세히 알아볼 수 있습니다. 미세 제어를 위해 tf.data 사용하기 위의 keras.preprocessing 유틸리티는 이미지의 디렉토리에서 `tf.data.Dataset`을 작성하는 편리한 방법입니다. 보다 세밀한 제어를 위해 `tf.data`을 사용하여 자체 입력 파이프라인을 작성할수 있습니다. 이 섹션에서는 이전에 다운로드한 zip 파일 경로부터 시작하여 이를 수행하는 방법을 보여줍니다.
###Code
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'))
for f in list_ds.take(5):
print(f.numpy())
###Output
_____no_output_____
###Markdown
파일의 트리 구조를 사용하여 `class_names` 목록을 컴파일할 수 있습니다.
###Code
class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
###Output
_____no_output_____
###Markdown
데이터세트를 훈련 및 검증으로 분할합니다.
###Code
val_size = int(image_count * 0.2)
train_ds = list_ds.skip(val_size)
val_ds = list_ds.take(val_size)
###Output
_____no_output_____
###Markdown
다음과 같이 각 데이터세트의 길이를 볼 수 있습니다.
###Code
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
###Output
_____no_output_____
###Markdown
파일 경로를 `(img, label)` 쌍으로 변환하는 간단한 함수를 작성합니다.
###Code
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
one_hot = parts[-2] == class_names
# Integer encode the label
return tf.argmax(one_hot)
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.image.decode_jpeg(img, channels=3)
# resize the image to the desired size
return tf.image.resize(img, [img_height, img_width])
def process_path(file_path):
label = get_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
###Output
_____no_output_____
###Markdown
`Dataset.map`을 사용하여 `image, label` 쌍의 데이터세트를 작성합니다.
###Code
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)
for image, label in labeled_ds.take(1):
print("Image shape: ", image.numpy().shape)
print("Label: ", label.numpy())
###Output
_____no_output_____
###Markdown
성능을 위한 데이터세트 구성하기 이 데이터세트로 모델을 훈련하려면 데이터에 대해 다음이 필요합니다.- 잘 섞는다.- 배치 처리한다.- 가능한 빨리 배치를 사용할 수 있어야 한다.이러한 기능은 `tf.data` API를 사용하여 추가할 수 있습니다. 자세한 내용은 [입력 파이프라인 성능](../../guide/performance/datasets) 가이드를 참조하세요.
###Code
def configure_for_performance(ds):
ds = ds.cache()
ds = ds.shuffle(buffer_size=1000)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
###Output
_____no_output_____
###Markdown
데이터 시각화하기이 데이터세트를 이전에 작성한 데이터세트와 유사하게 시각화할 수 있습니다.
###Code
image_batch, label_batch = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].numpy().astype("uint8"))
label = label_batch[i]
plt.title(class_names[label])
plt.axis("off")
###Output
_____no_output_____
###Markdown
모델 계속 훈련하기위의 `keras.preprocessing`에 의해 작성된 것과 유사한 `tf.data.Dataset`를 수동으로 빌드했습니다. 모델 훈련을 계속할 수 있습니다. 이전과 마찬가지로 실행 시간을 짧게 유지하기 위해 몇 가지 epoch 동안 훈련합니다.
###Code
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
###Output
_____no_output_____
###Markdown
TensorFlow 데이터세트 사용하기이 튜토리얼에서는 지금까지 디스크에서 데이터를 로드하는 데 중점을 두었습니다. [TensorFlow 데이터세트](https://www.tensorflow.org/datasets)에서 다운로드하기 쉬운 대규모 데이터세트 [카탈로그](https://www.tensorflow.org/datasets)를 탐색하여 사용할 데이터세트를 찾을 수도 있습니다. 이전에 Flowers 데이터세트를 디스크에서 로드했으므로 TensorFlow 데이터세트로 가져오는 방법을 살펴보겠습니다. TensorFlow 데이터세트를 사용하여 꽃 [데이터세트](https://www.tensorflow.org/datasets/catalog/tf_flowers)를 다운로드합니다.
###Code
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
꽃 데이터세트에는 5개의 클래스가 있습니다.
###Code
num_classes = metadata.features['label'].num_classes
print(num_classes)
###Output
_____no_output_____
###Markdown
데이터세트에서 이미지를 검색합니다.
###Code
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
###Output
_____no_output_____
###Markdown
이전과 마찬가지로, 성능을 위해 각 데이터세트를 일괄 처리, 셔플 및 구성해야 합니다.
###Code
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
test_ds = configure_for_performance(test_ds)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
이미지 로드 및 전처리하기 TensorFlow.org에서 보기 Google Colab에서 실행 GitHub에서 소스 보기 노트북 다운로드 이 튜토리얼은 두 가지 방법으로 이미지 데이터세트를 로드하고 전처리하는 방법을 보여줍니다. 먼저, 고급 Keras 전처리 [유틸리티](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) 및 [레이어](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing)를 사용합니다. 다음으로 [tf.data](https://www.tensorflow.org/guide/data)를 사용하여 처음부터 자체 입력 파이프라인을 작성합니다. 설정
###Code
import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import tensorflow_datasets as tfds
print(tf.__version__)
###Output
_____no_output_____
###Markdown
꽃 데이터세트 다운로드하기이 튜토리얼에서는 수천 장의 꽃 사진 데이터세트를 사용합니다. 꽃 데이터세트에는 클래스당 하나씩 5개의 하위 디렉토리가 있습니다.```flowers_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/``` 참고: 모든 이미지에는 CC-BY 라이선스가 있으며 크리에이터는 LICENSE.txt 파일에 나열됩니다.
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url,
fname='flower_photos',
untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
다운로드한 후 (218MB), 이제 꽃 사진의 사본을 사용할 수 있습니다. 총 3670개의 이미지가 있습니다.
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
각 디렉토리에는 해당 유형의 꽃 이미지가 포함되어 있습니다. 다음은 장미입니다.
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
`tf.keras.preprocessing`을 사용하여 로드하기`tf.keras.preprocessing.image_dataset_from_directory`를 사용하여 이러한 이미지를 디스크에서 로드해 보겠습니다. 참고: 이 섹션에 소개된 Keras Preprocesing 유틸리티 및 레이어는 현재 실험 중이며 변경될 수 있습니다. 데이터세트 만들기 로더를 위해 일부 매개변수를 정의합니다.
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
모델을 개발할 때 검증 분할을 사용하는 것이 좋습니다. 이미지의 80%를 훈련에 사용하고 20%를 유효성 검사에 사용합니다.
###Code
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
이러한 데이터세트의 `class_names` 속성에서 클래스 이름을 찾을 수 있습니다.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
데이터 시각화하기훈련 데이터세트의 처음 9개 이미지는 다음과 같습니다.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
이러한 데이터세트를 사용하는 모델을 `model.fit`(이 튜토리얼의 뒷부분에 표시)에 전달하여 모델을 훈련할 수 있습니다. 원하는 경우, 데이터세트를 수동으로 반복하고 이미지 배치를 검색할 수도 있습니다.
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
`image_batch`는 `(32, 180, 180, 3)` 형상의 텐서이며, `180x180x3` 형상의 32개 이미지 묶음으로 되어 있습니다(마지막 차원은 색상 채널 RGB를 나타냄). `label_batch`는 형상 `(32,)`의 텐서이며 32개 이미지에 해당하는 레이블입니다. 참고: 이들 텐서 중 하나에서 `.numpy()`를 호출하여 `numpy.ndarray`로 변환할 수 있습니다. 데이터 표준화하기 RGB 채널 값은 `[0, 255]` 범위에 있습니다. 이것은 신경망에 이상적이지 않습니다. 일반적으로 입력 값을 작게 만들어야 합니다. 여기서는 `tf.keras.layers.experimental.preprocessing.Rescaling` 레이어를 사용하여 `[0, 1]` 범위에 있도록 값을 표준화합니다.
###Code
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
이 레이어를 사용하는 방법에는 두 가지가 있습니다. map을 호출하여 데이터세트에 레이어를 적용할 수 있습니다.
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
또는 모델 정의 내에 레이어를 포함하여 배포를 단순화할 수 있습니다. 여기서는 두 번째 접근법을 사용할 것입니다. 참고: 픽셀 값을 `[-1,1]`으로 조정하려면 대신 `Rescaling(1./127.5, offset=-1)`를 작성할 수 있습니다. 참고: 이전에 `tf.keras.preprocessing.image_dataset_from_directory`의 `image_size` 인수를 사용하여 이미지 크기를 조정했습니다. 모델에 크기 조정 논리를 포함하려면 `tf.keras.layers.experimental.preprocessing.Resizing` 레이어를 대신 사용할 수 있습니다. 성능을 위한 데이터세트 구성하기버퍼링된 프리페치를 사용하여 I/O를 차단하지 않고 디스크에서 데이터를 생성할 수 있도록 하겠습니다. 데이터를 로드할 때 다음 두 가지 중요한 메서드를 사용해야 합니다.`.cache()`는 첫 번째 epoch 동안 디스크에서 이미지를 로드한 후 이미지를 메모리에 유지합니다. 이렇게 하면 모델을 훈련하는 동안 데이터세트가 병목 상태가 되지 않습니다. 데이터세트가 너무 커서 메모리에 맞지 않는 경우, 이 메서드를 사용하여 성능이 높은 온디스크 캐시를 생성할 수도 있습니다.`.prefetch()`는 훈련 중에 데이터 전처리 및 모델 실행과 겹칩니다.관심 있는 독자는 [데이터 성능 가이드](https://www.tensorflow.org/guide/data_performanceprefetching)에서 두 가지 메서드와 디스크에 데이터를 캐시하는 방법에 대해 자세히 알아볼 수 있습니다.
###Code
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
모델 훈련하기완전성을 위해 준비한 데이터세트를 사용하여 간단한 모델을 훈련하는 방법을 보여줍니다. 이 모델은 어떤 식으로든 조정되지 않았습니다. 목표는 방금 만든 데이터세트를 사용하여 역학을 보여주는 것입니다. 이미지 분류에 대한 자세한 내용은 이 [튜토리얼](https://www.tensorflow.org/tutorials/images/classification)을 참조하세요.
###Code
num_classes = 5
model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Rescaling(1./255),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(num_classes)
])
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
참고: 몇 개의 epoch에 대해서만 훈련하므로 이 튜토리얼은 빠르게 진행됩니다.
###Code
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
###Output
_____no_output_____
###Markdown
참고: `model.fit`을 사용하는 대신 사용자 정의 훈련 루프를 작성할 수도 있습니다. 자세한 내용은 이 [튜토리얼](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch)을 참조하세요. 검증 정확도가 훈련 정확도에 비해 낮으므로 모델이 과대적합되었음을 알 수 있습니다. 이 [튜토리얼](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit)에서 과대적합 및 이를 줄이는 방법에 대해 자세히 알아볼 수 있습니다. 미세 제어를 위해 `tf.data` 사용하기 위의 `tf.keras.preprocessing` 유틸리티는 이미지의 디렉토리에서 `tf.data.Dataset`을 작성하는 편리한 방법입니다. 보다 세밀한 제어를 위해 `tf.data`을 사용하여 자체 입력 파이프라인을 작성할 수 있습니다. 이 섹션에서는 이전에 다운로드한 TGZ 파일의 파일 경로부터 시작하여 이를 수행하는 방법을 보여줍니다.
###Code
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'), shuffle=False)
list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)
for f in list_ds.take(5):
print(f.numpy())
###Output
_____no_output_____
###Markdown
파일의 트리 구조를 사용하여 `class_names` 목록을 컴파일할 수 있습니다.
###Code
class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
###Output
_____no_output_____
###Markdown
데이터세트를 훈련 및 검증으로 분할합니다.
###Code
val_size = int(image_count * 0.2)
train_ds = list_ds.skip(val_size)
val_ds = list_ds.take(val_size)
###Output
_____no_output_____
###Markdown
다음과 같이 각 데이터세트의 길이를 볼 수 있습니다.
###Code
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
###Output
_____no_output_____
###Markdown
파일 경로를 `(img, label)` 쌍으로 변환하는 간단한 함수를 작성합니다.
###Code
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
one_hot = parts[-2] == class_names
# Integer encode the label
return tf.argmax(one_hot)
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.io.decode_jpeg(img, channels=3)
# resize the image to the desired size
return tf.image.resize(img, [img_height, img_width])
def process_path(file_path):
label = get_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
###Output
_____no_output_____
###Markdown
`Dataset.map`을 사용하여 `image, label` 쌍의 데이터세트를 작성합니다.
###Code
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)
for image, label in train_ds.take(1):
print("Image shape: ", image.numpy().shape)
print("Label: ", label.numpy())
###Output
_____no_output_____
###Markdown
성능을 위한 데이터세트 구성하기 이 데이터세트로 모델을 훈련하려면 데이터에 대해 다음이 필요합니다.- 잘 섞는다.- 배치 처리한다.- 가능한 빨리 배치를 사용할 수 있어야 한다.이러한 기능은 `tf.data` API를 사용하여 추가할 수 있습니다. 자세한 내용은 [입력 파이프라인 성능](../../guide/performance/datasets) 가이드를 참조하세요.
###Code
def configure_for_performance(ds):
ds = ds.cache()
ds = ds.shuffle(buffer_size=1000)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
###Output
_____no_output_____
###Markdown
데이터 시각화하기이 데이터세트를 이전에 작성한 데이터세트와 유사하게 시각화할 수 있습니다.
###Code
image_batch, label_batch = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].numpy().astype("uint8"))
label = label_batch[i]
plt.title(class_names[label])
plt.axis("off")
###Output
_____no_output_____
###Markdown
모델 계속 훈련하기위의 `keras.preprocessing`에 의해 작성된 것과 유사한 `tf.data.Dataset`를 수동으로 빌드했습니다. 이것으로 모델 훈련을 계속할 수 있습니다. 이전과 마찬가지로 실행 시간을 짧게 유지하기 위해 몇 개의 epoch 동안만 훈련합니다.
###Code
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
###Output
_____no_output_____
###Markdown
TensorFlow 데이터세트 사용하기이 튜토리얼에서는 지금까지 디스크에서 데이터를 로드하는 데 중점을 두었습니다. [TensorFlow 데이터세트](https://www.tensorflow.org/datasets)에서 다운로드하기 쉬운 대규모 데이터세트 [카탈로그](https://www.tensorflow.org/datasets)를 탐색하여 사용할 데이터세트를 찾을 수도 있습니다. 이전에 Flowers 데이터세트를 디스크에서 로드했으므로 TensorFlow 데이터세트로 가져오는 방법을 살펴보겠습니다. TensorFlow 데이터세트를 사용하여 꽃 [데이터세트](https://www.tensorflow.org/datasets/catalog/tf_flowers)를 다운로드합니다.
###Code
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
###Output
_____no_output_____
###Markdown
꽃 데이터세트에는 5개의 클래스가 있습니다.
###Code
num_classes = metadata.features['label'].num_classes
print(num_classes)
###Output
_____no_output_____
###Markdown
데이터세트에서 이미지를 검색합니다.
###Code
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
###Output
_____no_output_____
###Markdown
이전과 마찬가지로, 성능을 위해 각 데이터세트를 일괄 처리, 셔플 및 구성해야 합니다.
###Code
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
test_ds = configure_for_performance(test_ds)
###Output
_____no_output_____
|
Intensity Free.ipynb
|
###Markdown
Intensity FreeA modification of the original intensity free script to provide simulation of new data points and culling of input data points, in line with modelling the RHS of the split distribution LicenceBSD 3-Clause LicenseCopyright (c) 2020, Cyber Security Research Centre LimitedAll rights reserved.Redistribution and use in source and binary forms, with or withoutmodification, are permitted provided that the following conditions are met:1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THEIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AREDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLEFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIALDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS ORSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVERCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USEOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Preconfiguration/Setup
###Code
#files
modelSaveLoc = "/ifl-tpp/model.pt" #where to save the learned model
distSaveLoc = f"/ifl-tpp/data/synth.npz" #where to save the simulated distribution
backendCodeLoc = "/ifl-tpp/code"
#tweakables
RHS_only = True #Model only the RHS of the distribution. Only works with one input file.
min_val = 0.00025 #all interarrivals less than this will be culled if RHS_only is true (this is the split location)
#scripts for formatting the data to be used by this script. Expects the original data to be in 'data.csv'
#this can be changed inside package_data.py
#!cd "/ifl-tpp/"; python package_data.py 4 12.5 128
#!cd "/ifl-tpp/"; python package_data.py 1 45 128
import sys
sys.path.append(backendCodeLoc)
import dpp
import numpy as np
import torch
import torch.nn as nn
import torch.distributions as td
from copy import deepcopy
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
torch.set_default_tensor_type(torch.cuda.FloatTensor)
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
ConfigChange the values bellow to train on other datasets / with other models.
###Code
seed = 1
np.random.seed(seed)
torch.manual_seed(seed)
## General data config
dataset_name = '4-raw' # the name of the dataset to use, expects it in the 'data' folder
split = 'whole_sequences' # How to split the sequences (other 'each_sequence' -- split every seq. into train/val/test)
## General model config
use_history = True # Whether to use RNN to encode history
history_size = 64 # Size of the RNN hidden vector
rnn_type = 'LSTM' # Which RNN cell to use (other: ['GRU', 'LSTM'])
use_embedding = False # Whether to use sequence embedding (should use with 'each_sequence' split)
embedding_size = 64 # Size of the sequence embedding vector
# IMPORTANT: when using split = 'whole_sequences', the model will only learn embeddings
# for the training sequences, and not for validation / test
trainable_affine = False # Train the final affine layer
## Decoder config
decoder_name = 'LogNormMix' # other: ['RMTPP', 'FullyNeuralNet', 'Exponential', 'SOSPolynomial', 'DeepSigmoidalFlow']
n_components = 32 # Number of components for a mixture model
hypernet_hidden_sizes = [] # Number of units in MLP generating parameters ([] -- affine layer, [64] -- one layer, etc.)
## Flow params
# Polynomial
max_degree = 3 # Maximum degree value for Sum-of-squares polynomial flow (SOS)
n_terms = 4 # Number of terms for SOS flow
# DSF / FullyNN
n_layers = 2 # Number of layers for Deep Sigmoidal Flow (DSF) / Fully Neural Network flow (Omi et al., 2019)
layer_size = 64 # Number of mixture components / units in a layer for DSF and FullyNN
## Training config
regularization = 1e-5 # L2 regularization parameter
learning_rate = 1e-3 # Learning rate for Adam optimizer
max_epochs = 5000 # For how many epochs to train
display_step = 50 # Display training statistics after every display_step
patience = 50 # After how many consecutive epochs without improvement of val loss to stop training
###Output
_____no_output_____
###Markdown
Data- Load dataset- Split into training / validation / test set- Normalize input inter-event times- Break down long traning set sequences
###Code
if '+' not in dataset_name:
if (RHS_only):
dataset = dpp.data.load_dataset_min(dataset_name, min_val=min_val)
else:
dataset = dpp.data.load_dataset(dataset_name)
else:
# If '+' in dataset_name, load all the datasets together and concatenate them
# For example, dataset_name='synth/poisson+synth/renewal' loads poisson and renewal datasets
dataset_names = [d.strip() for d in dataset_name.split('+')]
dataset = dpp.data.load_dataset(dataset_names.pop(0))
for d in dataset_names:
dataset += dpp.data.load_dataset(dataset_names.pop(0))
# Split into train/val/test, on each sequence or assign whole sequences to different sets
if split == 'each_sequence':
d_train, d_val, d_test = dataset.train_val_test_split_each(seed=seed)
elif split == 'whole_sequences':
d_train, d_val, d_test = dataset.train_val_test_split_whole(seed=seed)
else:
raise ValueError(f'Unsupported dataset split {split}')
# Calculate mean and std of the input inter-event times and normalize only input
mean_in_train, std_in_train = d_train.get_mean_std_in()
std_out_train = 1.0
d_train.normalize(mean_in_train, std_in_train, std_out_train)
d_val.normalize(mean_in_train, std_in_train, std_out_train)
d_test.normalize(mean_in_train, std_in_train, std_out_train)
# Break down long train sequences for faster batch traning and create torch DataLoaders
d_train.break_down_long_sequences(128)
collate = dpp.data.collate
dl_train = torch.utils.data.DataLoader(d_train, batch_size=64, shuffle=True, collate_fn=collate)
dl_val = torch.utils.data.DataLoader(d_val, batch_size=1, shuffle=False, collate_fn=collate)
dl_test = torch.utils.data.DataLoader(d_test, batch_size=1, shuffle=False, collate_fn=collate)
# Set the parameters for affine normalization layer depending on the decoder (see Appendix D.3 in the paper)
if decoder_name in ['RMTPP', 'FullyNeuralNet', 'Exponential']:
_, std_out_train = d_train.get_mean_std_out()
mean_out_train = 0.0
else:
mean_out_train, std_out_train = d_train.get_log_mean_std_out()
###Output
Loading data...
0.00039005299913696945
###Markdown
Model setup- Define the model config- Define the optimizer
###Code
# General model config
general_config = dpp.model.ModelConfig(
use_history=use_history,
history_size=history_size,
rnn_type=rnn_type,
use_embedding=use_embedding,
embedding_size=embedding_size,
num_embeddings=len(dataset),
)
# Decoder specific config
decoder = getattr(dpp.decoders, decoder_name)(general_config,
n_components=n_components,
hypernet_hidden_sizes=hypernet_hidden_sizes,
max_degree=max_degree,
n_terms=n_terms,
n_layers=n_layers,
layer_size=layer_size,
shift_init=mean_out_train,
scale_init=std_out_train,
trainable_affine=trainable_affine)
# Define model
model = dpp.model.Model(general_config, decoder)
model.use_history(general_config.use_history)
model.use_embedding(general_config.use_embedding)
# Define optimizer
opt = torch.optim.Adam(model.parameters(), weight_decay=regularization, lr=learning_rate)
###Output
_____no_output_____
###Markdown
Training- Run for max_epochs or until the early stopping condition is satisfied- Calculate and save the training statistics
###Code
# Function that calculates the loss for the entire dataloader
def get_total_loss(loader):
loader_log_prob, loader_lengths = [], []
for input in loader:
loader_log_prob.append(model.log_prob(input).detach())
loader_lengths.append(input.length.detach())
return -model.aggregate(loader_log_prob, loader_lengths)
impatient = 0
best_loss = np.inf
best_model = deepcopy(model.state_dict())
training_val_losses = []
for epoch in range(max_epochs):
model.train()
for input in dl_train:
opt.zero_grad()
log_prob = model.log_prob(input)
loss = -model.aggregate(log_prob, input.length)
loss.backward()
opt.step()
model.eval()
loss_val = get_total_loss(dl_val)
training_val_losses.append(loss_val.item())
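# early-stopping bookkeeping: an improvement smaller than 1e-4 still counts as "no progress"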
if (best_loss - loss_val) < 1e-4:
impatient += 1
if loss_val < best_loss:
best_loss = loss_val.item()
best_model = deepcopy(model.state_dict())
else:
best_loss = loss_val.item()
best_model = deepcopy(model.state_dict())
impatient = 0
if impatient >= patience:
print(f'Breaking due to early stopping at epoch {epoch}')
break
if (epoch + 1) % display_step == 0:
print(f"Epoch {epoch+1:4d}, loss_train_last_batch = {loss:.4f}, loss_val = {loss_val:.4f}")
###Output
Epoch 50, loss_train_last_batch = -2.1351, loss_val = -2.4847
Epoch 100, loss_train_last_batch = -3.1897, loss_val = -2.5670
Epoch 150, loss_train_last_batch = -2.7778, loss_val = -2.6492
Epoch 200, loss_train_last_batch = -2.5979, loss_val = -2.6939
Epoch 250, loss_train_last_batch = -4.4496, loss_val = -2.7043
Epoch 300, loss_train_last_batch = -3.6371, loss_val = -2.7804
Epoch 350, loss_train_last_batch = -3.4573, loss_val = -2.8355
Epoch 400, loss_train_last_batch = -2.7067, loss_val = -2.8411
Epoch 450, loss_train_last_batch = -2.3216, loss_val = -2.8692
Epoch 500, loss_train_last_batch = -2.3602, loss_val = -2.8710
Epoch 550, loss_train_last_batch = -3.0704, loss_val = -2.8668
Epoch 600, loss_train_last_batch = -3.5139, loss_val = -2.8967
Epoch 650, loss_train_last_batch = -3.5007, loss_val = -2.9034
Epoch 700, loss_train_last_batch = -3.6317, loss_val = -2.9141
Epoch 750, loss_train_last_batch = -2.8044, loss_val = -2.9124
Breaking due to early stopping at epoch 759
###Markdown
Evaluation- Load the best model- Calculate the train/val/test loss- Plot the training curve
###Code
model.load_state_dict(best_model)
model.eval()
torch.save(model.state_dict(), modelSaveLoc) #save the model
pdf_loss_train = get_total_loss(dl_train)
pdf_loss_val = get_total_loss(dl_val)
pdf_loss_test = get_total_loss(dl_test)
print(f'Time NLL\n'
f'Train: {pdf_loss_train:.4f}\n'
f'Val: {pdf_loss_val.item():.4f}\n'
f'Test: {pdf_loss_test.item():.4f}')
training_val_losses = training_val_losses[:-patience] # plot only until early stopping
plt.plot(range(len(training_val_losses)), training_val_losses)
plt.ylabel('Validation loss')
plt.xlabel('Epoch')
plt.title(f'Training on "{dataset_name}" dataset')
plt.show()
###Output
_____no_output_____
###Markdown
Simulation (Added)
###Code
#this simulation loop is very slow, so prepare to wait if generating any significant number of new points
#load the best model
model = dpp.model.Model(general_config, decoder)
model.use_history(general_config.use_history)
model.use_embedding(general_config.use_embedding)
model.load_state_dict(torch.load(modelSaveLoc))
model.eval()
sec = 1000000000 #ns
#tweakable parameters
stop = 179000 #how many points to simulate
upper_limit = 36 #the maximum interarrival time allowed (anything greater is culled)
#do the input loop
#get a length of data to use as the starting history input
#format it so it can understand it
#do the prediction and transformations
#feed the result back in as extra history
#data to simulate, this should be the same as what was used to train the model
f = np.load(f"/ifl-tpp/data/{dataset_name}.npz", allow_pickle=True)
#prepare data for simulation
deltas = np.ediff1d(np.concatenate(f["arrival_times"]))
deltas = deltas[deltas >= min_val]
chunkSize = 128
#resplit into chunks of chunkSize, replacing the first inter-event time of each chunk with a placeholder 1.0
deltas = np.asarray([list(np.concatenate([[1.0],deltas[x+1:x+chunkSize]])) for x in range(0, len(deltas), chunkSize)])
history_input = [deltas[-10].copy()]
new_points = []
i = 0
oh_no = 0 #count of generated points that fell outside the allowed range and were culled
while (i < stop):
#print(history_input)
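# wrap the current history in a SequenceDataset so the same log transform and train-set normalisation are applied before predicting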
d_hist = dpp.data.SequenceDataset(delta_times=history_input, log_mode=True)
d_hist.normalize(mean_in_train, std_in_train, std_out_train)
dl_hist = torch.utils.data.DataLoader(d_hist, batch_size=1, shuffle=False, collate_fn=collate)
for input in dl_hist:
#print(input)
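# dl_hist holds a single sequence (batch_size=1), so the first batch is all we need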
predictionl = model.predict(input).detach()
break
#data transformation process
#when loaded, turned into log via natural logarithm
#before training, data is normalised
#training is performed
#predict new points given history (needs to be logged and normalised [using same normalisation parameters])
#given a list of points (most likely each point is using the last n points as history)
prediction = predictionl[0]
#reverse the normalisation => self.in_times = [(t - mean_in) / std_in for t in self.in_times]
#reverse the log
#from the top => mean_in_train, std_in_train
delta = ((prediction * std_in_train) + mean_in_train).exp().cpu().numpy().item() #the end bit removes it from the gpu
#if the generated point is outside reasonable bounds (cull anything shorter than 0.1ns or longer than upper_limit)
if (delta < 0.1/sec or delta > upper_limit):
#print(f"oh no: {delta}")
oh_no += 1
else:
#add on new point
new_points.append(delta)
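# slide the history window forward by one and append the newly sampled inter-event time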
history_input[0] = np.roll(history_input[0],-1)
history_input[0][-1] = delta
i += 1
if (i % 200 == 0):
print(f"\r{' '*20}\r{100*i/stop}%", flush=True, end='')
new_points = np.array(new_points)
print(f"\Culled Points: {oh_no} ({100/stop * oh_no}%)")
#Save the generated points
np.savez(modelSaveLoc, deltas=new_points)
###Output
_____no_output_____
|
courses/machine_learning/deepdive/04_advanced_preprocessing/taxicab_traffic/deploy.ipynb
|
###Markdown
Deploy for Online PredictionTo get our predictions, in addition to the features provided by the client, we also need to fetch the latest traffic information from BigQuery. We then combine these and invoke our tensorflow model. This is visualized by the 'on-demand' portion (red arrows) in the below diagram:To do this we'll take advantage of [AI Platforms Custom Prediction Routines](https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routines) which allows us to execute custom python code in response to every online prediction request. There are 5 steps to creating a custom prediction routine:1. Upload Model Artifacts to GCS2. Implement Predictor interface 3. Package the prediction code and dependencies4. Deploy5. Invoke API 1. Upload Model Artifacts to GCSHere we upload our model weights so that AI Platform can access them.
###Code
!gsutil cp -r $MODEL_PATH/* gs://$BUCKET/taxifare/model/
###Output
_____no_output_____
###Markdown
2. Implement Predictor InterfaceInterface Spec: https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routines#predictor-class This tells AI Platform how to load the model artifacts, and is where we specify our custom prediction code.Note: the correct PROJECT_ID will automatically be inserted using the bash `sed` command in the subsequent cell.
###Code
%%writefile predictor.py
import tensorflow as tf
from google.cloud import bigquery
PROJECT_ID = 'will_be_replaced'
class TaxifarePredictor(object):
def __init__(self, predict_fn):
self.predict_fn = predict_fn
def predict(self, instances, **kwargs):
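# fetch the most recent traffic count from BigQuery at prediction time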
bq = bigquery.Client(PROJECT_ID)
query_string = """
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 1
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
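# broadcast the single traffic value across every instance in the request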
instances['trips_last_5min'] = [trips for _ in range(len(list(instances.items())[0][1]))]
predictions = self.predict_fn(instances)
return predictions['predictions'].tolist() # convert to list so it is JSON serializable (requirement)
@classmethod
def from_path(cls, model_dir):
predict_fn = tf.contrib.predictor.from_saved_model(model_dir,'predict')
return cls(predict_fn)
!sed -i -e 's/will_be_replaced/{PROJECT_ID}/g' predictor.py
###Output
_____no_output_____
###Markdown
Test Predictor Class Works Locally
###Code
import predictor
instances = {'dayofweek' : [6,5],
'hourofday' : [12,11],
'pickuplon' : [-73.99,-73.99],
'pickuplat' : [40.758,40.758],
'dropofflat' : [40.742,40.758],
'dropofflon' : [-73.97,-73.97]}
predictor = predictor.TaxifarePredictor.from_path(MODEL_PATH)
predictor.predict(instances)
###Output
_____no_output_____
###Markdown
3. Package Predictor Class and DependenciesWe must package the predictor as a tar.gz source distribution package. Instructions for this are specified [here](http://cloud.google.com/ml-engine/docs/custom-prediction-routines#predictor-tarball). The AI Platform runtime comes preinstalled with several packages [listed here](https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list). However, it does not come with `google-cloud-bigquery`, so we list that as a dependency below.
###Code
%%writefile setup.py
from setuptools import setup
setup(
name='taxifare_custom_predict_code',
version='0.1',
scripts=['predictor.py'],
install_requires=[
'google-cloud-bigquery==1.16.0',
])
!python setup.py sdist --formats=gztar
!gsutil cp dist/taxifare_custom_predict_code-0.1.tar.gz gs://$BUCKET/taxifare/predict_code/
###Output
_____no_output_____
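Before uploading, it can be worth sanity-checking what actually went into the archive. A minimal check (assuming the sdist landed in `dist/` as in the cell above):
```bash
# list the files packaged into the source distribution
tar -tzf dist/taxifare_custom_predict_code-0.1.tar.gz
```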
###Markdown
4. DeployThis is similar to how we deploy standard models to AI Platform, with a few extra command line arguments.Note the use of the `--service-account` parameter below.The default service account does not have permissions to read from BigQuery, so we [specify a different service account](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models#service-account) that does have permission.Specifically, we use the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#compute_engine_default_service_account) which has the IAM project editor role.
###Code
!gcloud beta ai-platform models create $MODEL_NAME --regions us-central1 --enable-logging --enable-console-logging
#!gcloud ai-platform versions delete $VERSION_NAME --model taxifare --quiet
!gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--origin gs://$BUCKET/taxifare/model \
--service-account $(gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)")[email protected] \
--runtime-version 1.14 \
--python-version 3.5 \
--package-uris gs://$BUCKET/taxifare/predict_code/taxifare_custom_predict_code-0.1.tar.gz \
--prediction-class predictor.TaxifarePredictor
###Output
_____no_output_____
###Markdown
5. Invoke API **Warning:** You will see `ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth` when you run this. While it looks like an error this is actually just a warning and is safe to ignore, the subsequent cell will still work.
###Code
import googleapiclient.discovery
instances = {'dayofweek' : [6],
'hourofday' : [12],
'pickuplon' : [-73.99],
'pickuplat' : [40.758],
'dropofflat' : [40.742],
'dropofflon' : [-73.97]}
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)
response = service.projects().predict(
name=name,
body={'instances': instances}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
###Output
_____no_output_____
|
notes/bashcsv.ipynb
|
###Markdown
CSV command-line kung fu**TODO**: Convert to using [csvkit](https://csvkit.readthedocs.io/en/1.0.2/) Much better!You might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See [A Quick Introduction to Pipes and Redirection](http://bconnelly.net/working-with-csvs-on-the-command-line/a-quick-introduction-to-pipes-and-redirection)). We've already seen I/O redirection where we took the output of a command and wrote it to a file (`/tmp/t.csv`): ```bash$ iconv -c -f utf-8 -t ascii SampleSuperstoreSales.csv > /tmp/t.csv``` Extracting rowsNow, let me introduce you to the `grep` command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain `Annie Cyprus`:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent� Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
671,4676,8/31/11,High,3,49.59,0.07,Express Air,-8.38,12.28,6.47,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Paper,Xerox 1881,Small Box,0.38,9/2/11
672,4676,8/31/11,High,30,4253.009,0.01,Regular Air,1115.69,155.99,8.99,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,LX 788,Small Box,0.58,9/1/11
734,5284,7/8/11,Not Specified,7,59.38,0.1,Regular Air,-3.05,8.69,2.99,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Binders and Binder Accessories,"Cardinal Slant-D� Ring Binder, Heavy Gauge Vinyl",Small Box,0.39,7/10/11
3643,26051,6/27/10,Not Specified,22,795.74,0,Delivery Truck,-127.39,33.94,19.19,Annie Cyprus,Northwest Territories,Northwest Territories,Home Office,Furniture,Chairs & Chairmats,"Metal Folding Chairs, Beige, 4/Carton",Jumbo Drum,0.58,6/29/10
3644,26051,6/27/10,Not Specified,31,251.75,0.07,Regular Air,22.46,8.33,1.99,Annie Cyprus,Northwest Territories,Northwest Territories,Home Office,Technology,Computer Peripherals,"80 Minute Slim Jewel Case CD-R , 10/Pack - Staples",Small Pack,0.52,6/28/10
###Markdown
We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head /tmp/Annie.csv # show first few lines of that new file
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent� Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
671,4676,8/31/11,High,3,49.59,0.07,Express Air,-8.38,12.28,6.47,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Paper,Xerox 1881,Small Box,0.38,9/2/11
672,4676,8/31/11,High,30,4253.009,0.01,Regular Air,1115.69,155.99,8.99,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,LX 788,Small Box,0.58,9/1/11
734,5284,7/8/11,Not Specified,7,59.38,0.1,Regular Air,-3.05,8.69,2.99,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Binders and Binder Accessories,"Cardinal Slant-D� Ring Binder, Heavy Gauge Vinyl",Small Box,0.39,7/10/11
3643,26051,6/27/10,Not Specified,22,795.74,0,Delivery Truck,-127.39,33.94,19.19,Annie Cyprus,Northwest Territories,Northwest Territories,Home Office,Furniture,Chairs & Chairmats,"Metal Folding Chairs, Beige, 4/Carton",Jumbo Drum,0.58,6/29/10
3644,26051,6/27/10,Not Specified,31,251.75,0.07,Regular Air,22.46,8.33,1.99,Annie Cyprus,Northwest Territories,Northwest Territories,Home Office,Technology,Computer Peripherals,"80 Minute Slim Jewel Case CD-R , 10/Pack - Staples",Small Pack,0.52,6/28/10
###Markdown
If we want a specific row ID, we use a different regular expression that looks for a specific string at the left edge of a line (`^` means the beginning of a line):
###Code
! grep '^80,' data/SampleSuperstoreSales.csv
###Output
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
###Markdown
What if you want, say, two different rows added to another file? We do two `grep`s and a `>>` concatenation redirection:
###Code
! grep '^80,' data/SampleSuperstoreSales.csv > /tmp/two.csv # write first row
! grep '^160,' data/SampleSuperstoreSales.csv >> /tmp/two.csv # append second row
! cat /tmp/two.csv
###Output
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
160,995,5/30/11,Medium,46,1815.49,0.03,Regular Air,782.91,39.89,3.04,Neola Schneider,Nunavut,Nunavut,Home Office,Furniture,Office Furnishings,Ultra Commercial Grade Dual Valve Door Closer,Wrap Bag,0.53,5/31/11
###Markdown
If we'd like to see just the header row, we can use `head`:
###Code
! head -1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
If, on the other hand, we want to see everything but that row, we can use `tail` (which I pipe to `head` so then I see only the first two lines of output):
###Code
! tail +2 data/SampleSuperstoreSales.csv | head -2
###Output
1,3,10/13/10,Low,6,261.54,0.04,Regular Air,-213.25,38.94,35,Muhammed MacIntyre,Nunavut,Nunavut,Small Business,Office Supplies,Storage & Organization,"Eldon Base for stackable storage shelf, platinum",Large Box,0.8,10/20/10
49,293,10/1/12,High,49,10123.02,0.07,Delivery Truck,457.81,208.16,68.02,Barry French,Nunavut,Nunavut,Consumer,Office Supplies,Appliances,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",Jumbo Drum,0.58,10/2/12
tail: stdout: Broken pipe
###Markdown
The output would normally be many thousands of lines here so I have *piped* the output to the `head` command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command. ExerciseCount how many sales items there are in the `Technology` product category that are also `High` order priorities? Hint: `wc -l` counts the number of lines.
###Code
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
###Output
449
###Markdown
Extracting columnsExtracting columns is also pretty easy as long as there is a single delimiter, such as a comma, that clearly separates the columns. For example, let's say we wanted to get the customer name column (which is 12th by my count).
###Code
! cut -d ',' -f 12 /tmp/t.csv | head -10
###Output
Customer Name
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
###Markdown
(Where I'm using the `iconv`erted `/tmp/t.csv` file as `cut` expects pure ascii.) Actually, hang on a second. We don't want the `Customer Name` header to appear in the list so we combine with the `tail` we just saw to strip the header.
###Code
! cut -d ',' -f 12 /tmp/t.csv | tail +2 | head -10
###Output
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
Dorothy Badders
tail: stdout: Broken pipe
###Markdown
What if we want a unique list? All we have to do is sort and then call `uniq`:
###Code
! cut -d ',' -f 12 /tmp/t.csv | tail +2 | sort | uniq | head -10
###Output
Aaron Bergman
Aaron Hawkins
Aaron Smayling
Adam Bellavance
Adam Hart
Adam Shillingsburg
Adrian Barton
Adrian Hane
Adrian Shami
Aimee Bixby
###Markdown
You can get multiple columns at once but the order is always left to right (`cut` does not know how to flip column order). For example, here is how to get the sales ID and the customer name together:
###Code
! cut -d ',' -f 2,12 /tmp/t.csv |head -10
###Output
Order ID,Customer Name
3,Muhammed MacIntyre
293,Barry French
293,Barry French
483,Clay Rozendal
515,Carlos Soltero
515,Carlos Soltero
613,Carl Jackson
613,Carl Jackson
643,Monica Federle
###Markdown
Naturally, we can write any of this output to a file using the `>` redirection operator. Let's do that and put each of those columns into a separate file and then `paste` them back with the customer name first.
###Code
! cut -d ',' -f 2 /tmp/t.csv > /tmp/IDs
! cut -d ',' -f 12 /tmp/t.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
###Output
Customer Name Order ID
Muhammed MacIntyre 3
Barry French 293
Barry French 293
Clay Rozendal 483
Carlos Soltero 515
Carlos Soltero 515
Carl Jackson 613
Carl Jackson 613
Monica Federle 643
###Markdown
Amazing, right?! This is often a very efficient means of manipulating data files because you are directly talking to the operating system instead of through Python libraries. We also don't have to write any code, we just have to know some syntax for terminal commands.Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer sorted in reverse numerical order? We've already got the list of customers and we can use an argument on `uniq` to get the count instead of just making a unique set. Then, we can use a second `sort` with arguments to reverse sort and use numeric rather than text-based sorting. This gives us a histogram:
###Code
! cut -d ',' -f 12 /tmp/t.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
41 Darren Budd
38 Ed Braxton
35 Brad Thomas
33 Carlos Soltero
30 Patrick Jones
29 Tony Sayre
28 Nora Price
28 Mark Cousins
28 Lena Creighton
28 Joy Smith
###Markdown
ExerciseModify the command so that you get a histogram of the shipping mode.
###Code
! cut -d ',' -f 8 /tmp/t.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
6270 Regular Air
1146 Delivery Truck
983 Express Air
###Markdown
CSV command-line kung fuYou might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See [A Quick Introduction to Pipes and Redirection](http://bconnelly.net/working-with-csvs-on-the-command-line/a-quick-introduction-to-pipes-and-redirection)). We've already seen I/O redirection where we took the output of a command and wrote it to a file (`/tmp/t.csv`): ```bash$ iconv -c -f utf-8 -t ascii SampleSuperstoreSales.csv > /tmp/t.csv``` Set up```bashpip install csvkit``` Extracting rows with grepNow, let me introduce you to the `grep` command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain `Annie Cyprus`:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv | head -3
###Output
249.0,1702.0,2011-05-06,High,23.0,67.24,0.06,Regular Air,4.9,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,2011-05-07
669.0,4676.0,2011-08-31,High,11.0,1210.0514999999998,0.04,Regular Air,-104.24700000000007,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,2011-09-01
670.0,4676.0,2011-08-31,High,50.0,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,2011-09-02
###Markdown
We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head -3 /tmp/Annie.csv # show first 3 lines of that new file
###Output
249.0,1702.0,2011-05-06,High,23.0,67.24,0.06,Regular Air,4.9,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,2011-05-07
669.0,4676.0,2011-08-31,High,11.0,1210.0514999999998,0.04,Regular Air,-104.24700000000007,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,2011-09-01
670.0,4676.0,2011-08-31,High,50.0,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,2011-09-02
###Markdown
Filtering with csvgrep[csvkit](https://csvkit.readthedocs.io/en/1.0.3/) is an amazing package with lots of cool CSV utilities for use on the command line. `csvgrep` is one of them.If we want a specific row ID, then we need to use the more powerful `csvgrep` not just `grep`. We use a different regular expression that looks for a specific string at the left edge of a line (`^` means the beginning of a line, `$` means end of line or end of record):
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
What if you want, say, two different rows added to another file? We do two `grep`s and a `>>` concatenation redirection:
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv > /tmp/two.csv # write first row
! csvgrep -c 1 -r '^160$' -e latin1 data/SampleSuperstoreSales.csv >> /tmp/two.csv # append second row
! cat /tmp/two.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
Beginning, end of files If we'd like to see just the header row, we can use `head`:
###Code
! head -1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
If, on the other hand, we want to see everything but that row, we can use `tail` (which I pipe to `head` so then I see only the first two lines of output):
###Code
! tail +2 data/SampleSuperstoreSales.csv | head -2
###Output
1.0,3.0,2010-10-13,Low,6.0,261.54,0.04,Regular Air,-213.25,38.94,35.0,Muhammed MacIntyre,Nunavut,Nunavut,Small Business,Office Supplies,Storage & Organization,"Eldon Base for stackable storage shelf, platinum",Large Box,0.8,2010-10-20
49.0,293.0,2012-10-01,High,49.0,10123.02,0.07,Delivery Truck,457.81,208.16,68.02,Barry French,Nunavut,Nunavut,Consumer,Office Supplies,Appliances,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",Jumbo Drum,0.58,2012-10-02
tail: stdout: Broken pipe
###Markdown
The output would normally be many thousands of lines here so I have *piped* the output to the `head` command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command. ExerciseCount how many sales items there are in the `Technology` product category that are also `High` order priorities? Hint: `wc -l` counts the number of lines.
###Code
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
###Output
449
###Markdown
Extracting columns with csvcutExtracting columns is also pretty easy with `csvcut`. For example, let's say we wanted to get the customer name column (which is 12th by my count).
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | head -10
###Output
Customer Name
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
###Markdown
Actually, hang on a second. We don't want the `Customer Name` header to appear in the list so we combine with the `tail` we just saw to strip the header.
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | head -10
###Output
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
Dorothy Badders
tail: stdout: Broken pipe
###Markdown
What if we want a unique list? All we have to do is sort and then call `uniq`:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq | head -10
###Output
Aaron Bergman
Aaron Hawkins
Aaron Smayling
Adam Bellavance
Adam Hart
Adam Shillingsburg
Adrian Barton
Adrian Hane
Adrian Shami
Aimee Bixby
###Markdown
You can get multiple columns at once in the order specified. For example, here is how to get the sales ID and the customer name together (name first then ID):
###Code
! csvcut -c 12,2 -e latin1 data/SampleSuperstoreSales.csv |head -10
###Output
Customer Name,Order ID
Muhammed MacIntyre,3.0
Barry French,293.0
Barry French,293.0
Clay Rozendal,483.0
Carlos Soltero,515.0
Carlos Soltero,515.0
Carl Jackson,613.0
Carl Jackson,613.0
Monica Federle,643.0
###Markdown
Naturally, we can write any of this output to a file using the `>` redirection operator. Let's do that and put each of those columns into a separate file and then `paste` them back with the customer name first.
###Code
! csvcut -c 2 -e latin1 data/SampleSuperstoreSales.csv > /tmp/IDs
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
###Output
Customer Name Order ID
Muhammed MacIntyre 3.0
Barry French 293.0
Barry French 293.0
Clay Rozendal 483.0
Carlos Soltero 515.0
Carlos Soltero 515.0
Carl Jackson 613.0
Carl Jackson 613.0
Monica Federle 643.0
###Markdown
Amazing, right?! This is often a very efficient means of manipulating data files because you are directly talking to the operating system instead of through Python libraries. We also don't have to write any code, we just have to know some syntax for terminal commands.Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer sorted in reverse numerical order? We've already got the list of customers and we can use an argument on `uniq` to get the count instead of just making a unique set. Then, we can use a second `sort` with arguments to reverse sort and use numeric rather than text-based sorting. This gives us a histogram:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
41 Darren Budd
38 Ed Braxton
35 Brad Thomas
33 Carlos Soltero
30 Patrick Jones
29 Tony Sayre
28 Nora Price
28 Mark Cousins
28 Lena Creighton
28 Joy Smith
###Markdown
ExerciseModify the command so that you get a histogram of the shipping mode.
###Code
! csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
6270 Regular Air
1146 Delivery Truck
983 Express Air
###Markdown
CSV command-line kung fuYou might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See [A Quick Introduction to Pipes and Redirection](http://bconnelly.net/working-with-csvs-on-the-command-line/a-quick-introduction-to-pipes-and-redirection)). To motivate the use of the command line, rather than just reading everything into Python: command-line tools will often process data much faster. (Although, in this case, we are using a Python-based command-line tool.) Most importantly, you can launch many of these commands simultaneously from the command line, computing everything in parallel using the multiple CPU cores you have in your computer. If you have 4 cores, you have the potential to process the data four times faster than a single-threaded Python program. (A small parallel-execution sketch appears at the end of this notebook.) We've already seen I/O redirection where we took the output of a command and wrote it to a file (`/tmp/t.csv`): ```bash$ iconv -c -f utf-8 -t ascii SampleSuperstoreSales.csv > /tmp/t.csv``` Set up```bashpip install csvkit``` Extracting rows with grepNow, let me introduce you to the `grep` command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain `Annie Cyprus`:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv | head -3
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head -3 /tmp/Annie.csv # show first 3 lines of that new file
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
Filtering with csvgrep[csvkit](https://csvkit.readthedocs.io/en/1.0.3/) is an amazing package with lots of cool CSV utilities for use on the command line. `csvgrep` is one of them.If we want a specific row ID, then we need to use the more powerful `csvgrep` not just `grep`. We use a different regular expression that looks for a specific string at the left edge of a line (`^` means the beginning of a line, `$` means end of line or end of record):
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
###Markdown
What if you want, say, two different rows?
###Code
! csvgrep -c 1 -r '^(80|160)$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
160,995,5/30/11,Medium,46,1815.49,0.03,Regular Air,782.91,39.89,3.04,Neola Schneider,Nunavut,Nunavut,Home Office,Furniture,Office Furnishings,Ultra Commercial Grade Dual Valve Door Closer,Wrap Bag,0.53,5/31/11
###Markdown
Beginning, end of files If we'd like to see just the header row, we can use `head`:
###Code
! head -1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
If, on the other hand, we want to see everything but that row, we can use `tail` (which I pipe to `head` so then I see only the first two lines of output):
###Code
! tail +2 data/SampleSuperstoreSales.csv | head -2
###Output
1,3,10/13/10,Low,6,261.54,0.04,Regular Air,-213.25,38.94,35,Muhammed MacIntyre,Nunavut,Nunavut,Small Business,Office Supplies,Storage & Organization,"Eldon Base for stackable storage shelf, platinum",Large Box,0.8,10/20/10
49,293,10/1/12,High,49,10123.02,0.07,Delivery Truck,457.81,208.16,68.02,Barry French,Nunavut,Nunavut,Consumer,Office Supplies,Appliances,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",Jumbo Drum,0.58,10/2/12
tail: stdout: Broken pipe
###Markdown
The output would normally be many thousands of lines here so I have *piped* the output to the `head` command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command. ExerciseCount how many sales items there are in the `Technology` product category that are also `High` order priorities? Hint: `wc -l` counts the number of lines.
###Code
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
###Output
449
###Markdown
Extracting columns with csvcutExtracting columns is also pretty easy with `csvcut`. For example, let's say we wanted to get the customer name column (which is 12th by my count).
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | head -10
###Output
Customer Name
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
###Markdown
Actually, hang on a second. We don't want the `Customer Name` header to appear in the list so we combine with the `tail` we just saw to strip the header.
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | head -10
###Output
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
Dorothy Badders
tail: stdout: Broken pipe
###Markdown
What if we want a unique list? All we have to do is sort and then call `uniq`:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq | head -10
###Output
Aaron Bergman
Aaron Hawkins
Aaron Smayling
Adam Bellavance
Adam Hart
Adam Shillingsburg
Adrian Barton
Adrian Hane
Adrian Shami
Aimee Bixby
###Markdown
You can get multiple columns at once in the order specified. For example, here is how to get the sales ID and the customer name together (name first then ID):
###Code
! csvcut -c 12,2 -e latin1 data/SampleSuperstoreSales.csv |head -10
###Output
Customer Name,Order ID
Muhammed MacIntyre,3
Barry French,293
Barry French,293
Clay Rozendal,483
Carlos Soltero,515
Carlos Soltero,515
Carl Jackson,613
Carl Jackson,613
Monica Federle,643
###Markdown
Naturally, we can write any of this output to a file using the `>` redirection operator. Let's do that and put each of those columns into a separate file and then `paste` them back with the customer name first.
###Code
! csvcut -c 2 -e latin1 data/SampleSuperstoreSales.csv > /tmp/IDs
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
###Output
Customer Name Order ID
Muhammed MacIntyre 3
Barry French 293
Barry French 293
Clay Rozendal 483
Carlos Soltero 515
Carlos Soltero 515
Carl Jackson 613
Carl Jackson 613
Monica Federle 643
###Markdown
Amazing, right?! This is often a very efficient means of manipulating data files because you are directly talking to the operating system instead of through Python libraries. We also don't have to write any code, we just have to know some syntax for terminal commands.Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer sorted in reverse numerical order? We've already got the list of customers and we can use an argument on `uniq` to get the count instead of just making a unique set. Then, we can use a second `sort` with arguments to reverse sort and use numeric rather than text-based sorting. This gives us a histogram:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
41 Darren Budd
38 Ed Braxton
35 Brad Thomas
33 Carlos Soltero
30 Patrick Jones
29 Tony Sayre
28 Nora Price
28 Mark Cousins
28 Lena Creighton
28 Joy Smith
###Markdown
ExerciseModify the command so that you get a histogram of the shipping mode.
###Code
! csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
6270 Regular Air
1146 Delivery Truck
983 Express Air
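These two histograms are independent pipelines, so, as promised in the introduction, nothing stops us from running them at the same time and letting separate CPU cores do the work. A minimal sketch (same commands as above, each backgrounded with `&` and writing its result to its own file):
```bash
# run both histogram pipelines concurrently; `wait` blocks until both have finished
csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n > /tmp/customer_hist &
csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n > /tmp/shipmode_hist &
wait
```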
###Markdown
CSV command-line kung fuYou might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See [A Quick Introduction to Pipes and Redirection](http://bconnelly.net/working-with-csvs-on-the-command-line/a-quick-introduction-to-pipes-and-redirection)). We've already seen I/O redirection where we took the output of a command and wrote it to a file (`/tmp/t.csv`): ```bash$ iconv -c -f utf-8 -t ascii SampleSuperstoreSales.csv > /tmp/t.csv``` Set up```bashpip install csvkit``` Extracting rows with grepNow, let me introduce you to the `grep` command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain `Annie Cyprus`:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv | head -3
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent� Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head -3 /tmp/Annie.csv # show first 3 lines of that new file
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent� Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
Filtering with csvgrep[csvkit](https://csvkit.readthedocs.io/en/1.0.3/) is an amazing package with lots of cool CSV utilities for use on the command line. `csvgrep` is one of them.If we want a specific row ID, then we need to use the more powerful `csvgrep` not just `grep`. We use a different regular expression that looks for a specific string at the left edge of a line (`^` means the beginning of a line, `$` means end of line or end of record):
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
###Markdown
What if you want, say, two different rows added to another file? We do two `grep`s and a `>>` concatenation redirection:
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv > /tmp/two.csv # write first row
! csvgrep -c 1 -r '^160$' -e latin1 data/SampleSuperstoreSales.csv >> /tmp/two.csv # append second row
! cat /tmp/two.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
160,995,5/30/11,Medium,46,1815.49,0.03,Regular Air,782.91,39.89,3.04,Neola Schneider,Nunavut,Nunavut,Home Office,Furniture,Office Furnishings,Ultra Commercial Grade Dual Valve Door Closer,Wrap Bag,0.53,5/31/11
###Markdown
Beginning, end of files If we'd like to see just the header row, we can use `head`:
###Code
! head -1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
If, on the other hand, we want to see everything but that row, we can use `tail` (which I pipe to `head` so then I see only the first two lines of output):
###Code
! tail +2 data/SampleSuperstoreSales.csv | head -2
###Output
1,3,10/13/10,Low,6,261.54,0.04,Regular Air,-213.25,38.94,35,Muhammed MacIntyre,Nunavut,Nunavut,Small Business,Office Supplies,Storage & Organization,"Eldon Base for stackable storage shelf, platinum",Large Box,0.8,10/20/10
49,293,10/1/12,High,49,10123.02,0.07,Delivery Truck,457.81,208.16,68.02,Barry French,Nunavut,Nunavut,Consumer,Office Supplies,Appliances,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",Jumbo Drum,0.58,10/2/12
tail: stdout: Broken pipe
###Markdown
The output would normally be many thousands of lines here so I have *piped* the output to the `head` command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command. ExerciseCount how many sales items there are in the `Technology` product category that are also `High` order priorities? Hint: `wc -l` counts the number of lines.
###Code
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
###Output
449
###Markdown
Extracting columns with csvcutExtracting columns is also pretty easy with `csvcut`. For example, let's say we wanted to get the customer name column (which is 12th by my count).
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | head -10
###Output
Customer Name
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
###Markdown
Actually, hang on a second. We don't want the `Customer Name` header to appear in the list so we combine with the `tail` we just saw to strip the header.
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | head -10
###Output
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
Dorothy Badders
tail: stdout: Broken pipe
###Markdown
What if we want a unique list? All we have to do is sort and then call `uniq`:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq | head -10
###Output
Aaron Bergman
Aaron Hawkins
Aaron Smayling
Adam Bellavance
Adam Hart
Adam Shillingsburg
Adrian Barton
Adrian Hane
Adrian Shami
Aimee Bixby
###Markdown
You can get multiple columns at once in the order specified. For example, here is how to get the sales ID and the customer name together (name first then ID):
###Code
! csvcut -c 12,2 -e latin1 data/SampleSuperstoreSales.csv |head -10
###Output
Customer Name,Order ID
Muhammed MacIntyre,3
Barry French,293
Barry French,293
Clay Rozendal,483
Carlos Soltero,515
Carlos Soltero,515
Carl Jackson,613
Carl Jackson,613
Monica Federle,643
###Markdown
Naturally, we can write any of this output to a file using the `>` redirection operator. Let's do that and put each of those columns into a separate file and then `paste` them back with the customer name first.
###Code
! csvcut -c 2 -e latin1 data/SampleSuperstoreSales.csv > /tmp/IDs
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
###Output
Customer Name Order ID
Muhammed MacIntyre 3
Barry French 293
Barry French 293
Clay Rozendal 483
Carlos Soltero 515
Carlos Soltero 515
Carl Jackson 613
Carl Jackson 613
Monica Federle 643
###Markdown
Amazing, right?! This is often a very efficient means of manipulating data files because you are directly talking to the operating system instead of through Python libraries. We also don't have to write any code, we just have to know some syntax for terminal commands.Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer sorted in reverse numerical order? We've already got the list of customers and we can use an argument on `uniq` to get the count instead of just making a unique set. Then, we can use a second `sort` with arguments to reverse sort and use numeric rather than text-based sorting. This gives us a histogram:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
41 Darren Budd
38 Ed Braxton
35 Brad Thomas
33 Carlos Soltero
30 Patrick Jones
29 Tony Sayre
28 Nora Price
28 Mark Cousins
28 Lena Creighton
28 Joy Smith
###Markdown
ExerciseModify the command so that you get a histogram of the shipping mode.
###Code
! csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
6270 Regular Air
1146 Delivery Truck
983 Express Air
###Markdown
CSV command-line kung fuYou might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See [A Quick Introduction to Pipes and Redirection](http://bconnelly.net/working-with-csvs-on-the-command-line/a-quick-introduction-to-pipes-and-redirection)). To motivate the use of the command line, rather than just reading everything into Python: commandline tools will often process data much faster. (Although, in this case, we are using a Python-based commandline tool.) Most importantly, you can launch many of these commands simultaneously from the commandline, computing everything in parallel using the multiple CPU cores you have in your computer. If you have 4 cores, you have the potential to process the data four times faster than a single-threaded Python program. Stripping chars beyond 255 from the commandlineIf there are characters within the file that are non-ASCII and larger than 255, we can convert the file using the command line. Here's a simple version of the problem I put into file `/tmp/foo.html`:```html<html><body>གྷ```I deliberately injected a Unicode code point > 255, which requires two bytes to store. Most of the characters require just one byte. Here is the first part of the file:```bash$ od -c -t xC /tmp/foo.html0000000 \n \n གྷ ** 3c 68 74 6d 6c 3e 0a 3c 62 6f 64 79 3e 0a e0 bd...``` Here is how you could strip any non-one-byte characters from the file before processing:```bash$ iconv -c -f utf-8 -t ascii /tmp/foo.html ```We've already seen I/O redirection where we took the output of a command and wrote it to a file. We can do the same here: ```bash$ iconv -c -f utf-8 -t ascii /tmp/foo.html > /tmp/foo2.html``` Set up for CSV on commandline```bashpip install csvkit``` Extracting rows with grepNow, let me introduce you to the `grep` command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain `Annie Cyprus`:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv | head -3
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head -3 /tmp/Annie.csv # show first 3 lines of that new file
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
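###Markdown
Because each of these pipelines is an ordinary operating-system process, nothing stops us from launching several of them at once and letting the OS spread the work across CPU cores, as mentioned above. A sketch (the second customer is just another name from this file):
```bash
$ grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv &
$ grep 'Barry French' data/SampleSuperstoreSales.csv > /tmp/Barry.csv &
$ wait    # block until both background jobs have finished
```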
###Markdown
Filtering with csvgrep[csvkit](https://csvkit.readthedocs.io/en/1.0.3/) is an amazing package with lots of cool CSV utilities for use on the command line. `csvgrep` is one of them.If we want a specific row ID, then we need to use the more powerful `csvgrep` not just `grep`. We use a different regular expression that looks for a specific string at the left edge of a line (`^` means the beginning of a line, `$` means end of line or end of record):
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
###Markdown
What if you want, say, two different rows?
###Code
! csvgrep -c 1 -r '^(80|160)$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
160,995,5/30/11,Medium,46,1815.49,0.03,Regular Air,782.91,39.89,3.04,Neola Schneider,Nunavut,Nunavut,Home Office,Furniture,Office Furnishings,Ultra Commercial Grade Dual Valve Door Closer,Wrap Bag,0.53,5/31/11
###Markdown
Beginning, end of files If we'd like to see just the header row, we can use `head`:
###Code
! head -1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
If, on the other hand, we want to see everything but that row, we can use `tail` (which I pipe to `head` so then I see only the first two lines of output):
###Code
! tail +2 data/SampleSuperstoreSales.csv | head -2
###Output
1,3,10/13/10,Low,6,261.54,0.04,Regular Air,-213.25,38.94,35,Muhammed MacIntyre,Nunavut,Nunavut,Small Business,Office Supplies,Storage & Organization,"Eldon Base for stackable storage shelf, platinum",Large Box,0.8,10/20/10
49,293,10/1/12,High,49,10123.02,0.07,Delivery Truck,457.81,208.16,68.02,Barry French,Nunavut,Nunavut,Consumer,Office Supplies,Appliances,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",Jumbo Drum,0.58,10/2/12
tail: stdout: Broken pipe
###Markdown
The output would normally be many thousands of lines here, so I have *piped* the output to the `head` command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command. ExerciseCount how many sales items there are in the `Technology` product category that are also `High` order priorities. Hint: `wc -l` counts the number of lines.
###Code
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
###Output
449
###Markdown
Extracting columns with csvcutExtracting columns is also pretty easy with `csvcut`. For example, let's say we wanted to get the customer name column (which is 12th by my count).
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | head -10
###Output
Customer Name
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
###Markdown
Actually, hang on a second. We don't want the `Customer Name` header to appear in the list, so we combine it with the `tail` command we just saw to strip off the header.
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | head -10
###Output
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
Dorothy Badders
tail: stdout: Broken pipe
###Markdown
What if we want a unique list? All we have to do is sort and then call `uniq`:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq | head -10
###Output
Aaron Bergman
Aaron Hawkins
Aaron Smayling
Adam Bellavance
Adam Hart
Adam Shillingsburg
Adrian Barton
Adrian Hane
Adrian Shami
Aimee Bixby
###Markdown
You can get multiple columns at once in the order specified. For example, here is how to get the order ID and the customer name together (name first, then ID):
###Code
! csvcut -c 12,2 -e latin1 data/SampleSuperstoreSales.csv |head -10
###Output
Customer Name,Order ID
Muhammed MacIntyre,3
Barry French,293
Barry French,293
Clay Rozendal,483
Carlos Soltero,515
Carlos Soltero,515
Carl Jackson,613
Carl Jackson,613
Monica Federle,643
###Markdown
Naturally, we can write any of this output to a file using the `>` redirection operator. Let's do that and put each of those columns into a separate file and then `paste` them back with the customer name first.
###Code
! csvcut -c 2 -e latin1 data/SampleSuperstoreSales.csv > /tmp/IDs
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
###Output
Customer Name Order ID
Muhammed MacIntyre 3
Barry French 293
Barry French 293
Clay Rozendal 483
Carlos Soltero 515
Carlos Soltero 515
Carl Jackson 613
Carl Jackson 613
Monica Federle 643
###Markdown
Amazing, right?! This is often a very efficient means of manipulating data files because you are talking directly to the operating system instead of going through Python libraries. We also don't have to write any code; we just have to know some syntax for terminal commands. Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer, sorted in reverse numerical order? We've already got the list of customers, and we can use an argument on `uniq` to get the count instead of just making a unique set. Then, we can use a second `sort` with arguments to reverse-sort and to use numeric rather than text-based sorting. This gives us a histogram:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
41 Darren Budd
38 Ed Braxton
35 Brad Thomas
33 Carlos Soltero
30 Patrick Jones
29 Tony Sayre
28 Nora Price
28 Mark Cousins
28 Lena Creighton
28 Joy Smith
###Markdown
ExerciseModify the command so that you get a histogram of the shipping mode.
###Code
! csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
6270 Regular Air
1146 Delivery Truck
983 Express Air
###Markdown
CSV command-line kung fuYou might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See [A Quick Introduction to Pipes and Redirection](http://bconnelly.net/working-with-csvs-on-the-command-line/a-quick-introduction-to-pipes-and-redirection)). We've already seen I/O redirection where we took the output of a command and wrote it to a file (`/tmp/t.csv`): ```bash$ iconv -c -f utf-8 -t ascii SampleSuperstoreSales.csv > /tmp/t.csv``` Set up```bashpip install csvkit``` Extracting rows with grepNow, let me introduce you to the `grep` command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain `Annie Cyprus`:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv | head -3
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head -3 /tmp/Annie.csv # show first 3 lines of that new file
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
Filtering with csvgrep[csvkit](https://csvkit.readthedocs.io/en/1.0.3/) is an amazing package with lots of cool CSV utilities for use on the command line. `csvgrep` is one of them.If we want a specific row ID, then we need to use the more powerful `csvgrep` not just `grep`. We use a different regular expression that looks for a specific string at the left edge of a line (`^` means the beginning of a line, `$` means end of line or end of record):
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
###Markdown
What if you want, say, two different rows?
###Code
! csvgrep -c 1 -r '^(80|160)$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
160,995,5/30/11,Medium,46,1815.49,0.03,Regular Air,782.91,39.89,3.04,Neola Schneider,Nunavut,Nunavut,Home Office,Furniture,Office Furnishings,Ultra Commercial Grade Dual Valve Door Closer,Wrap Bag,0.53,5/31/11
###Markdown
Beginning, end of files If we'd like to see just the header row, we can use `head`:
###Code
! head -1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
If, on the other hand, we want to see everything but that row, we can use `tail` (which I pipe to `head` so then I see only the first two lines of output):
###Code
! tail +2 data/SampleSuperstoreSales.csv | head -2
###Output
1.0,3.0,2010-10-13,Low,6.0,261.54,0.04,Regular Air,-213.25,38.94,35.0,Muhammed MacIntyre,Nunavut,Nunavut,Small Business,Office Supplies,Storage & Organization,"Eldon Base for stackable storage shelf, platinum",Large Box,0.8,2010-10-20
49.0,293.0,2012-10-01,High,49.0,10123.02,0.07,Delivery Truck,457.81,208.16,68.02,Barry French,Nunavut,Nunavut,Consumer,Office Supplies,Appliances,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",Jumbo Drum,0.58,2012-10-02
tail: stdout: Broken pipe
###Markdown
The output would normally be many thousands of lines here, so I have *piped* the output to the `head` command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command. ExerciseCount how many sales items there are in the `Technology` product category that are also `High` order priorities. Hint: `wc -l` counts the number of lines.
###Code
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
###Output
449
###Markdown
Extracting columns with csvcutExtracting columns is also pretty easy with `csvcut`. For example, let's say we wanted to get the customer name column (which is 12th by my count).
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | head -10
###Output
Customer Name
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
###Markdown
Actually, hang on a second. We don't want the `Customer Name` header to appear in the list, so we combine it with the `tail` command we just saw to strip off the header.
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | head -10
###Output
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
Dorothy Badders
tail: stdout: Broken pipe
###Markdown
What if we want a unique list? All we have to do is sort and then call `uniq`:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq | head -10
###Output
Aaron Bergman
Aaron Hawkins
Aaron Smayling
Adam Bellavance
Adam Hart
Adam Shillingsburg
Adrian Barton
Adrian Hane
Adrian Shami
Aimee Bixby
###Markdown
You can get multiple columns at once in the order specified. For example, here is how to get the order ID and the customer name together (name first, then ID):
###Code
! csvcut -c 12,2 -e latin1 data/SampleSuperstoreSales.csv |head -10
###Output
Customer Name,Order ID
Muhammed MacIntyre,3.0
Barry French,293.0
Barry French,293.0
Clay Rozendal,483.0
Carlos Soltero,515.0
Carlos Soltero,515.0
Carl Jackson,613.0
Carl Jackson,613.0
Monica Federle,643.0
###Markdown
Naturally, we can write any of this output to a file using the `>` redirection operator. Let's do that and put each of those columns into a separate file and then `paste` them back with the customer name first.
###Code
! csvcut -c 2 -e latin1 data/SampleSuperstoreSales.csv > /tmp/IDs
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
###Output
Customer Name Order ID
Muhammed MacIntyre 3.0
Barry French 293.0
Barry French 293.0
Clay Rozendal 483.0
Carlos Soltero 515.0
Carlos Soltero 515.0
Carl Jackson 613.0
Carl Jackson 613.0
Monica Federle 643.0
###Markdown
Amazing, right?! This is often a very efficient means of manipulating data files because you are talking directly to the operating system instead of going through Python libraries. We also don't have to write any code; we just have to know some syntax for terminal commands. Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer, sorted in reverse numerical order? We've already got the list of customers, and we can use an argument on `uniq` to get the count instead of just making a unique set. Then, we can use a second `sort` with arguments to reverse-sort and to use numeric rather than text-based sorting. This gives us a histogram:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
41 Darren Budd
38 Ed Braxton
35 Brad Thomas
33 Carlos Soltero
30 Patrick Jones
29 Tony Sayre
28 Nora Price
28 Mark Cousins
28 Lena Creighton
28 Joy Smith
###Markdown
ExerciseModify the command so that you get a histogram of the shipping mode.
###Code
! csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
6270 Regular Air
1146 Delivery Truck
983 Express Air
###Markdown
CSV command-line kung fuYou might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See [A Quick Introduction to Pipes and Redirection](http://bconnelly.net/working-with-csvs-on-the-command-line/a-quick-introduction-to-pipes-and-redirection)). To motivate the use of the command line, rather than just reading everything into Python: commandline tools will often process data much faster. (Although, in this case, we are using a Python-based commandline tool.) Most importantly, you can launch many of these commands simultaneously from the commandline, computing everything in parallel using the multiple CPU cores you have in your computer. If you have 4 cores, you have the potential to process the data four times faster than a single-threaded Python program. Stripping chars beyond 255 from the commandlineIf there are characters within the file that are non-ASCII and larger than 255, we can convert the file using the command line. Here's a simple version of the problem I put into file `/tmp/foo.html`:```html<html><body>གྷ```I deliberately injected a Unicode code point > 255, which requires two bytes to store. Most of the characters require just one byte. Here is the first part of the file:```bash$ od -c -t xC /tmp/foo.html0000000 \n \n གྷ ** 3c 68 74 6d 6c 3e 0a 3c 62 6f 64 79 3e 0a e0 bd...``` Here is how you could strip any non-one-byte characters from the file before processing:```bash$ iconv -c -f utf-8 -t ascii /tmp/foo.html ```We've already seen I/O redirection where we took the output of a command and wrote it to a file. We can do the same here: ```bash$ iconv -c -f utf-8 -t ascii /tmp/foo.html > /tmp/foo2.html``` Set up for CSV on commandline```bashpip install csvkit``` Extracting rows with grepNow, let me introduce you to the `grep` command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain `Annie Cyprus`:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv | head -3
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
###Code
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head -3 /tmp/Annie.csv # show first 3 lines of that new file
###Output
249,1702,5/6/11,High,23,67.24,0.06,Regular Air,4.90,2.84,0.93,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Pens & Art Supplies,SANFORD Liquid Accent™ Tank-Style Highlighters,Wrap Bag,0.54,5/7/11
669,4676,8/31/11,High,11,1210.0515,0.04,Regular Air,-104.25,125.99,7.69,Annie Cyprus,Nunavut,Nunavut,Home Office,Technology,Telephones and Communication,Timeport L7089,Small Box,0.58,9/1/11
670,4676,8/31/11,High,50,187.83,0.03,Regular Air,85.96,3.75,0.5,Annie Cyprus,Nunavut,Nunavut,Home Office,Office Supplies,Labels,Avery 510,Small Box,0.37,9/2/11
###Markdown
Filtering with csvgrep[csvkit](https://csvkit.readthedocs.io/en/1.0.3/) is an amazing package with lots of cool CSV utilities for use on the command line. `csvgrep` is one of them.If we want a specific row ID, then we need to use the more powerful `csvgrep` not just `grep`. We use a different regular expression that looks for a specific string at the left edge of a line (`^` means the beginning of a line, `$` means end of line or end of record):
###Code
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
###Markdown
What if you want, say, two different rows?
###Code
! csvgrep -c 1 -r '^(80|160)$' -e latin1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
80,483,7/10/11,High,30,4965.7595,0.08,Regular Air,1198.97,195.99,3.99,Clay Rozendal,Nunavut,Nunavut,Corporate,Technology,Telephones and Communication,R380,Small Box,0.58,7/12/11
160,995,5/30/11,Medium,46,1815.49,0.03,Regular Air,782.91,39.89,3.04,Neola Schneider,Nunavut,Nunavut,Home Office,Furniture,Office Furnishings,Ultra Commercial Grade Dual Valve Door Closer,Wrap Bag,0.53,5/31/11
###Markdown
Beginning, end of files If we'd like to see just the header row, we can use `head`:
###Code
! head -1 data/SampleSuperstoreSales.csv
###Output
Row ID,Order ID,Order Date,Order Priority,Order Quantity,Sales,Discount,Ship Mode,Profit,Unit Price,Shipping Cost,Customer Name,Province,Region,Customer Segment,Product Category,Product Sub-Category,Product Name,Product Container,Product Base Margin,Ship Date
###Markdown
If, on the other hand, we want to see everything but that row, we can use `tail` (which I pipe to `head` so then I see only the first two lines of output):
###Code
! tail +2 data/SampleSuperstoreSales.csv | head -2
###Output
1,3,10/13/10,Low,6,261.54,0.04,Regular Air,-213.25,38.94,35,Muhammed MacIntyre,Nunavut,Nunavut,Small Business,Office Supplies,Storage & Organization,"Eldon Base for stackable storage shelf, platinum",Large Box,0.8,10/20/10
49,293,10/1/12,High,49,10123.02,0.07,Delivery Truck,457.81,208.16,68.02,Barry French,Nunavut,Nunavut,Consumer,Office Supplies,Appliances,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",Jumbo Drum,0.58,10/2/12
tail: stdout: Broken pipe
###Markdown
The output would normally be many thousands of lines here, so I have *piped* the output to the `head` command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command. ExerciseCount how many sales items there are in the `Technology` product category that are also `High` order priorities. Hint: `wc -l` counts the number of lines.
###Code
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
###Output
449
###Markdown
Extracting columns with csvcutExtracting columns is also pretty easy with `csvcut`. For example, let's say we wanted to get the customer name column (which is 12th by my count).
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | head -10
###Output
Customer Name
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
###Markdown
Actually, hang on a second. We don't want the `Customer Name` header to appear in the list, so we combine it with the `tail` command we just saw to strip off the header.
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | head -10
###Output
Muhammed MacIntyre
Barry French
Barry French
Clay Rozendal
Carlos Soltero
Carlos Soltero
Carl Jackson
Carl Jackson
Monica Federle
Dorothy Badders
tail: stdout: Broken pipe
###Markdown
What if we want a unique list? All we have to do is sort and then call `uniq`:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq | head -10
###Output
Aaron Bergman
Aaron Hawkins
Aaron Smayling
Adam Bellavance
Adam Hart
Adam Shillingsburg
Adrian Barton
Adrian Hane
Adrian Shami
Aimee Bixby
###Markdown
You can get multiple columns at once in the order specified. For example, here is how to get the order ID and the customer name together (name first, then ID):
###Code
! csvcut -c 12,2 -e latin1 data/SampleSuperstoreSales.csv |head -10
###Output
Customer Name,Order ID
Muhammed MacIntyre,3
Barry French,293
Barry French,293
Clay Rozendal,483
Carlos Soltero,515
Carlos Soltero,515
Carl Jackson,613
Carl Jackson,613
Monica Federle,643
###Markdown
Naturally, we can write any of this output to a file using the `>` redirection operator. Let's do that and put each of those columns into a separate file and then `paste` them back with the customer name first.
###Code
! csvcut -c 2 -e latin1 data/SampleSuperstoreSales.csv > /tmp/IDs
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
###Output
Customer Name Order ID
Muhammed MacIntyre 3
Barry French 293
Barry French 293
Clay Rozendal 483
Carlos Soltero 515
Carlos Soltero 515
Carl Jackson 613
Carl Jackson 613
Monica Federle 643
###Markdown
Amazing, right?! This is often a very efficient means of manipulating data files because you are talking directly to the operating system instead of going through Python libraries. We also don't have to write any code; we just have to know some syntax for terminal commands. Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer, sorted in reverse numerical order? We've already got the list of customers, and we can use an argument on `uniq` to get the count instead of just making a unique set. Then, we can use a second `sort` with arguments to reverse-sort and to use numeric rather than text-based sorting. This gives us a histogram:
###Code
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
41 Darren Budd
38 Ed Braxton
35 Brad Thomas
33 Carlos Soltero
30 Patrick Jones
29 Tony Sayre
28 Nora Price
28 Mark Cousins
28 Lena Creighton
28 Joy Smith
###Markdown
ExerciseModify the command so that you get a histogram of the shipping mode.
###Code
! csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
###Output
6270 Regular Air
1146 Delivery Truck
983 Express Air
|
src/notebooks/250-basic-stacked-area-chart.ipynb
|
###Markdown
A basic stacked area chart can be plotted by the `stackplot()` function of matplotlib. The parameters passed to the function are:* `x` : x axis positions* `y` : y axis positions* `labels` : labels to assign to each data seriesNote that for the `y` input you can either give a single sequence of arrays or pass several arrays separately. The example below shows both ways.
###Code
# libraries
import numpy as np
import matplotlib.pyplot as plt
# --- FORMAT 1
# Your x and y axis
x=range(1,6)
y=[ [1,4,6,8,9], [2,2,7,10,12], [2,8,5,10,6] ]
# Basic stacked area chart.
plt.stackplot(x,y, labels=['A','B','C'])
plt.legend(loc='upper left')
plt.show()
# --- FORMAT 2
x=range(1,6)
y1=[1,4,6,8,9]
y2=[2,2,7,10,12]
y3=[2,8,5,10,6]
# Basic stacked area chart.
plt.stackplot(x,y1, y2, y3, labels=['A','B','C'])
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____
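###Markdown
The same chart can also be driven from a pandas `DataFrame`; this is a minimal sketch (not part of the original example, and the column names `A`, `B`, `C` are arbitrary), reusing the `x`, `y1`, `y2`, `y3` values defined above.
###Code
import pandas as pd

# Put the three series into a DataFrame indexed by the x positions
df = pd.DataFrame({'A': y1, 'B': y2, 'C': y3}, index=list(x))

# Stack the columns and label them with the column names
plt.stackplot(df.index, df['A'], df['B'], df['C'], labels=df.columns)
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____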
###Markdown
Welcome to the introductory template of the Python Graph Gallery. Here is how to proceed to add a new `.ipynb` file that will be converted to a blogpost in the gallery! Notebook Metadata It is very important to add the following fields to your notebook. It helps build the page later on:- **slug**: the URL of the blogPost. It should be exactly the same as the file title. Example: `70-basic-density-plot-with-seaborn`- **chartType**: the chart type like density or heatmap. For a complete list see [here](https://github.com/holtzy/The-Python-Graph-Gallery/blob/master/src/util/sectionDescriptions.js); it must be one of the `id` options.- **title**: what will be written in big on top of the blogpost! Use html syntax there.- **description**: what will be written just below the title, centered text.- **keyword**: list of keywords related to the blogpost- **seoDescription**: a description for the blogpost meta. Should be a bit shorter than the description and must not contain any html syntax. Add a chart description A chart example always comes with some explanation. It must contain keywords, link to related pages like the parent page (graph section), and give explanations (in depth for complicated charts, high level for beginner-level charts). Add a chart
###Code
import seaborn as sns, numpy as np
np.random.seed(0)
x = np.random.randn(100)
ax = sns.distplot(x)
###Output
_____no_output_____
|
Course04/19-18_UKF/UKF.ipynb
|
###Markdown
UKFIn this exercise, you will become familiar with the UKF method, which is a robust tool for estimating the value of a measured quantity. Later in the exercise, you will apply it to estimate the position of a one-dimensional quadcopter which can move only along the vertical axis. Next, you will create the class that will have all the functions needed to perform the localization of the object in the one-dimensional environment. As mentioned, for simplicity we will use a drone that can only move in the vertical direction; for this drone the state is simply the vertical velocity and position, $X=(\dot{z},z)$. The control input for the drone is the vertical acceleration $u = \ddot{z}$. As in the KF, we have to define the measurement error associated with measuring the height variable.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import jdc
from ipywidgets import interactive
from scipy.stats import multivariate_normal
from scipy.linalg import sqrtm
pylab.rcParams['figure.figsize'] = 10, 10
pylab.rcParams['figure.figsize'] = 10, 10
###Output
_____no_output_____
###Markdown
UKF As a reminder from the theory, let us list the constants used in the UKF method.* $N$ represents the configuration space dimension and in this case, it is equal to 2. * $\lambda$ is a scaling parameter. $\lambda = \alpha^2 (N+k)-N$* $\gamma$ describes how far from the mean we would like to select the sigma points along the eigenvectors. $\gamma =\sqrt{N+\lambda}$* $\alpha$ determines the spread of the sigma points and is set to $1$.* $k$ is the secondary scaling parameter, which is set to $3-N$.* Finally, $\beta$ is set to $2$ as we assume that the distribution is Gaussian in nature.
###Code
class UKF:
def __init__(self,
sensor_sigma, # Motion noise
velocity_sigma, # Velocity uncertainty
position_sigma, # Velocity uncertainty
dt # dt time between samples
):
# Sensor measurement covariance
self.r_t = np.array([[sensor_sigma**2]])
# Motion model noise for velocity and position
self.q_t = np.array([[velocity_sigma**2,0.0],
[0.0,position_sigma**2]])
self.dt = dt
self.mu = np.array([[0.0],
[0.0]])
self.sigma = np.array([[0.0, 0.0],
[0.0, 0.0]])
self.mu_bar = self.mu
self.sigma_bar = self.sigma
self.n = self.q_t.shape[0]
self.sigma_points = np.zeros((self.n, 2*self.n+1))
# Creating the contestants
self.alpha = 1.0
self.betta = 2.0
self.k = 3.0 - self.n
self.lam = self.alpha**2 * (self.n + self.k) - self.n
self.gamma = np.sqrt(self.n + self.lam)
self.x_bar = self.sigma_points
def initial_values(self,mu_0, sigma_0):
self.mu = mu_0
self.sigma = sigma_0
###Output
_____no_output_____
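###Markdown
To make these constants concrete, here is a small numeric check (illustrative only, not part of the original exercise): with $N=2$, $\alpha=1$ and $k=3-N=1$ we get $\lambda=1$ and $\gamma=\sqrt{3}\approx 1.73$.
###Code
# Illustrative sanity check of the UKF scaling constants for N = 2
N = 2
alpha = 1.0
k = 3.0 - N
lam = alpha**2 * (N + k) - N   # lambda = 1.0
gamma = np.sqrt(N + lam)       # gamma = sqrt(3) ~ 1.732
print(lam, gamma)
###Output
_____no_output_____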
###Markdown
Declaring the initial values and initializing the object.
###Code
z = 2.0 # Initial position
v = 1.0 # Initial velocity
dt = 1.0 # The time difference between measures
motion_error = 0.01 # Sensor sigma
velocity_sigma = 0.01 # Velocity uncertainty
position_sigma = 0.01 # Position uncertainty
mu_0 = np.array([[v],
[z]])
cov_0 = np.array([[velocity_sigma**2, 0.0],
[0.0, position_sigma**2]])
u = np.array([0.0]) # no commant is given \ddot{z} = 0
MYUKF=UKF(motion_error, velocity_sigma, position_sigma, dt)
MYUKF.initial_values(mu_0, cov_0)
###Output
_____no_output_____
###Markdown
Compute Sigma points In this step, we will implement the compute-sigmas step that takes the mean and the covariance matrix and returns the sigma points selected around the mean point. $$X_{i,t} = \Bigg \{ \begin{array}{l l} =x_t & i=0 \\=x_t+\gamma S_i & i=1,...,N \\=x_t-\gamma S_{i-N} & i=N+1,...,2N \end{array}$$$S_i$ is the $i^{th}$ column of $S=\sqrt{\Sigma}$. PredictAs a reminder from the previous 1D case, we know that the transition function has the following form:$$g(x_t,u_t,\Delta t) = \begin{bmatrix} 1 & 0 \\ \Delta t & 1 \end{bmatrix} \begin{bmatrix} \dot{z}\\z \end{bmatrix} + \begin{bmatrix} \Delta t \\ 0 \end{bmatrix} \begin{bmatrix} \ddot{z} \end{bmatrix} = A_t \mu_{t-1}+B_tu_t$$The partial derivative of $g$ with respect to each state component is:$$g'(x_t,u_t,\Delta t) = \begin{bmatrix} 1 & 0 \\ \Delta t & 1 \end{bmatrix}$$As the $A$ and $B$ matrices, in general, depend on external parameters, we declare them as separate functions.
###Code
print(list(range(1, 10)))
%%add_to UKF
def compute_sigmas(self):
S = sqrtm(self.sigma)
# TODO: Implement the sigma points
self.sigma_points[:, 0] = self.mu.squeeze()
self.sigma_points[:, 1:self.n+1] = self.mu + self.gamma * S
self.sigma_points[:, self.n+1:2*self.n+1] = self.mu - self.gamma * S
return self.sigma_points
@property
def a(self):
return np.array([[1.0, 0.0],
[self.dt, 1.0]])
@property
def b(self):
return np.array([[self.dt],
[0.0]])
def g(self,u):
g = np.zeros((self.n, self.n+1))
g = np.matmul(self.a, self.sigma_points) + self.b * u
return g
def predict(self, u):
# TODO: Implement the predicting step
self.compute_sigmas()
x_bar = self.g(u)
self.x_bar = x_bar
return x_bar
###Output
_____no_output_____
###Markdown
Predicting the next position based on the initial data
###Code
u = 0 # no control input is given
print(MYUKF.predict(0))
###Output
[[ 1. 1.01732051 1. 0.98267949 1. ]
[ 3. 3.01732051 3.01732051 2.98267949 2.98267949]]
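###Markdown
As a quick cross-check (illustrative, not part of the exercise), the first column above is just the deterministic motion model applied to the current mean, i.e. $A\mu + Bu$ with $u=0$:
###Code
# Illustrative: with u = 0 the centre sigma point equals A @ mu
print(MYUKF.a @ MYUKF.mu + MYUKF.b * 0.0)
###Output
_____no_output_____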
###Markdown
UpdateOnce we have selected the sigma points and predicted their new state, it is time to estimate the value based on the predicted sigma points and the measured value. As a reminder, the weights for the mean and covariance are given below. Weights for the mean:$$w_i^m = \Bigg \{ \begin{array}{l l} =\frac{\lambda}{N+\lambda} & i=0 \\=\frac{1}{2(N+\lambda)} & i>0\end{array}$$Weights for computing the covariance:$$w_i^c=\Bigg \{\begin{array}{l l} =\frac{\lambda}{N+\lambda} +(1-\alpha^2+\beta^2) & i=0 \\=\frac{1}{2(N+\lambda)} & i>0 \end{array}$$
###Code
%%add_to UKF
@property
def weights_mean(self):
w_m = np.zeros((2*self.n+1, 1))
# TODO: Calculate the weight to calculate the mean based on the predicted sigma points
w_m[0] = self.lam / (self.n + self.lam)
w_m[1:] = 0.5 / (self.n + self.lam)
self.w_m = w_m
return w_m
@property
def weights_cov(self):
w_cov = np.zeros((2*self.n+1, 1))
# TODO: Calculate the weight to calculate the covariance based on the predicted sigma points
w_cov[0] = self.lam / (self.n + self.lam) + (1 - self.alpha**2 - self.betta**2)
w_cov[1:] = 0.5 / (self.n + self.lam)
self.w_cov = w_cov
return w_cov
def h(self,Z):
return np.matmul(np.array([[0.0, 1.0]]), Z)
def update(self,z_in):
# TODO: Implement the update step
mu_bar = self.x_bar @ self.weights_mean
cov_bar = self.weights_cov.T * (self.x_bar - mu_bar) @ (self.x_bar - mu_bar).T + self.q_t
z = self.h(self.x_bar)
mu_z = z @ self.weights_mean
cov_z = self.weights_cov.T * (z - mu_z) @ (z - mu_z).T + self.r_t
cov_xz = self.weights_cov.T * (self.x_bar - mu_bar) @ (z - mu_z).T
k = cov_xz @ np.linalg.pinv(cov_z)
mu_t = mu_bar + k * (z_in - mu_z)
cov_t = cov_bar - k @ cov_z @ k.T
self.mu = mu_t
self.sigma = cov_t
return mu_t, cov_t
###Output
_____no_output_____
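###Markdown
Before running the update on an actual measurement, it can help to sanity-check the weights (illustrative only, not part of the original exercise): with $N=2$ and $\lambda=1$ the mean weights are $1/3$ for the centre point and $1/6$ for the other four, so they sum to one.
###Code
# Illustrative: the mean weights should sum to 1 (1/3 + 4 * 1/6)
print(MYUKF.weights_mean.ravel(), MYUKF.weights_mean.sum())
###Output
_____no_output_____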
###Markdown
Updating the estimated value based on the measurement.
###Code
z_measured = 3.11
print(MYUKF.update(z_measured))
###Output
(array([[ 1.03666667],
[ 3.07333333]]), array([[ 1.66666667e-04, 3.33333333e-05],
[ 3.33333333e-05, 1.66666667e-04]]))
###Markdown
UKF + PIDIn this section, the drone is controlled using the altitude estimated by the UKF filter.
###Code
from CoaxialDrone import CoaxialCopter
from PIDcontroller import PIDController_with_ff
from PathGeneration import flight_path
###Output
_____no_output_____
###Markdown
First, we will generate the flight path, which is a constant height of 1 m.
###Code
total_time = 10.0 # Total flight time
dt = 0.01  # Time interval between measurements
t, z_path, z_dot_path, z_dot_dot_path = flight_path(total_time, dt,'constant' )
###Output
_____no_output_____
###Markdown
IMUFor this section, we will use a simple IMU which only adds noise to the actual altitude measurements.
###Code
class IMU:
def __init__(self):
pass
def measure(self, z, sigma=0.001):
return z + np.random.normal(0.0, sigma)
from DronewithPIDControllerUKF import DronewithPIDUKF
sensor_error = 0.1
velocity_sigma = 0.1
position_sigma = 0.1
MYUKF = UKF(sensor_error, velocity_sigma, position_sigma, dt)
#Initializing the drone with PID controller and providing information of the desired flight path.
FlyingDrone = DronewithPIDUKF(z_path, z_dot_path, z_dot_dot_path, t, dt, IMU, UKF)
interactive_plot = interactive(FlyingDrone.PID_controler_with_KF,
position_sigma = (0.0, 0.1, 0.001),
motion_sigma = (0.0, 0.1, 0.001))
output = interactive_plot.children[-1]
output.layout.height = '800px'
interactive_plot
###Output
_____no_output_____
|
bhsa/export.ipynb
|
###Markdown
You might want to consider the [start](search.ipynb) of this tutorial. Short introductions to other TF datasets:* [Dead Sea Scrolls](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/dss.ipynb),* [Old Babylonian Letters](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/oldbabylonian.ipynb),or the* [Q'uran](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/quran.ipynb) Export to Emdros MQL[EMDROS](http://emdros.org), written by Ulrik Petersen, is a text database system with the powerful *topographic* query language MQL. The ideas are based on a model devised by Christ-Jan Doedens in [Text Databases: One Database Model and Several Retrieval Languages](https://books.google.nl/books?id=9ggOBRz1dO4C). Text-Fabric's model of slots, nodes and edges is a fairly straightforward translation of the models of Christ-Jan Doedens and Ulrik Petersen. [SHEBANQ](https://shebanq.ancient-data.org) uses EMDROS to let users execute and save MQL queries against the Hebrew Text Database of the ETCBC. So it is kind of logical and convenient to be able to work with a Text-Fabric resource through MQL. If you have obtained an MQL dataset somehow, you can turn it into a Text-Fabric data set by `importMQL()`, which we will not show here. And if you want to export a Text-Fabric data set to MQL, that is also possible. After the `Fabric(modules=...)` call, you can call `exportMQL()` in order to save all features of the indicated modules into a big MQL dump, which can be imported by an EMDROS database.
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
IncantationThe ins and outs of installing Text-Fabric, getting the corpus, and initializing a notebook areexplained in the [start tutorial](start.ipynb).
###Code
from tf.app import use
# A = use('bhsa', hoist=globals())
A = use("bhsa:clone", checkout="clone", hoist=globals())
TF.exportMQL("mybhsa", "~/Downloads")
###Output
0.00s Checking features of dataset mybhsa
###Markdown
You might want to consider the [start](search.ipynb) of this tutorial. Short introductions to other TF datasets:* [Dead Sea Scrolls](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/dss.ipynb),* [Old Babylonian Letters](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/oldbabylonian.ipynb),or the* [Q'uran](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/quran.ipynb) Export to Emdros MQL[EMDROS](http://emdros.org), written by Ulrik Petersen, is a text database system with the powerful *topographic* query language MQL. The ideas are based on a model devised by Christ-Jan Doedens in [Text Databases: One Database Model and Several Retrieval Languages](https://books.google.nl/books?id=9ggOBRz1dO4C). Text-Fabric's model of slots, nodes and edges is a fairly straightforward translation of the models of Christ-Jan Doedens and Ulrik Petersen. [SHEBANQ](https://shebanq.ancient-data.org) uses EMDROS to let users execute and save MQL queries against the Hebrew Text Database of the ETCBC. So it is kind of logical and convenient to be able to work with a Text-Fabric resource through MQL. If you have obtained an MQL dataset somehow, you can turn it into a Text-Fabric data set by `importMQL()`, which we will not show here. And if you want to export a Text-Fabric data set to MQL, that is also possible. After the `Fabric(modules=...)` call, you can call `exportMQL()` in order to save all features of the indicated modules into a big MQL dump, which can be imported by an EMDROS database.
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
IncantationThe ins and outs of installing Text-Fabric, getting the corpus, and initializing a notebook areexplained in the [start tutorial](start.ipynb).
###Code
from tf.app import use
# A = use('bhsa', hoist=globals())
A = use("bhsa:clone", checkout="clone", hoist=globals())
TF.exportMQL("mybhsa", "~/Downloads")
###Output
0.00s Checking features of dataset mybhsa
|
notebooks/datasets_descriptions.ipynb
|
###Markdown
This notebook generates the summary description table for the datasets.
###Code
import pandas as pd
from drsu.config import DRSUConfiguration
from drsu.datasets import ALL_DESCRIPTORS, as_pandas, download_and_transform_dataset
DRSUConfiguration.local_dataset_dir = '../data'
RESULTS_DIR = '../results'
DATASETS = []
for dd in ALL_DESCRIPTORS:
if dd.id.startswith('amz_'):
if dd.n_rows > 1000000:
continue
DATASETS.append(dd)
print('Chosen Datasets: ', [dd.name for dd in DATASETS])
for dd in DATASETS:
download_and_transform_dataset(dd, verbose=False)
print(f'"{dd.name}" ready')
res = pd.DataFrame(columns=['Rows', '# of Users', '# of Items', 'Avg RPU', 'Avg RPI'], index=[dd.name for dd in DATASETS])
for dd in DATASETS:
df = as_pandas(dd)
res['Rows'][dd.name] = len(df)
res['# of Users'][dd.name] = df['user_id'].nunique()
res['# of Items'][dd.name] = df['item_id'].nunique()
res['Avg RPU'][dd.name] = f"{res['Rows'][dd.name] / res['# of Users'][dd.name]:.2f}"
res['Avg RPI'][dd.name] = f"{res['Rows'][dd.name] / res['# of Items'][dd.name]:.2f}"
res
###Output
_____no_output_____
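###Markdown
`RESULTS_DIR` is defined above but not otherwise used in this notebook; a natural follow-up (a sketch with a made-up file name) is to persist the summary table there.
###Code
import os

# Hypothetical output path; adjust the file name as needed
os.makedirs(RESULTS_DIR, exist_ok=True)
res.to_csv(os.path.join(RESULTS_DIR, 'dataset_descriptions.csv'))
###Output
_____no_output_____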
|
courses/dl2/cifar10-dawn.ipynb
|
###Markdown
CIFAR 10
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.conv_learner import *
from fastai.models.cifar10.wideresnet import wrn_22
torch.backends.cudnn.benchmark = True
PATH = Path("data/cifar10/")
os.makedirs(PATH,exist_ok=True)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159]))
bs=512
sz=32
tfms = tfms_from_stats(stats, sz, aug_tfms=[RandomCrop(sz), RandomFlip()], pad=sz//8)
data = ImageClassifierData.from_paths(PATH, val_name='test', tfms=tfms, bs=bs)
m = wrn_22()
learn = ConvLearner.from_model_data(m, data)
learn.crit = nn.CrossEntropyLoss()
learn.metrics = [accuracy]
wd=1e-4
lr=1.5
%time learn.fit(lr, 1, wds=wd, cycle_len=30, use_clr_beta=(20,20,0.95,0.85))
###Output
_____no_output_____
###Markdown
**Important: This notebook will only work with fastai-0.7.x. Do not try to run any fastai-1.x code from this path in the repository because it will load fastai-0.7.x** CIFAR 10
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.conv_learner import *
from fastai.models.cifar10.wideresnet import wrn_22
torch.backends.cudnn.benchmark = True
PATH = Path("data/cifar10/")
os.makedirs(PATH,exist_ok=True)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159]))
bs=512
sz=32
tfms = tfms_from_stats(stats, sz, aug_tfms=[RandomCrop(sz), RandomFlip()], pad=sz//8)
data = ImageClassifierData.from_paths(PATH, val_name='test', tfms=tfms, bs=bs)
m = wrn_22()
learn = ConvLearner.from_model_data(m, data)
learn.crit = nn.CrossEntropyLoss()
learn.metrics = [accuracy]
wd=1e-4
lr=1.5
%time learn.fit(lr, 1, wds=wd, cycle_len=30, use_clr_beta=(20,20,0.95,0.85))
###Output
_____no_output_____
###Markdown
CIFAR 10
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.conv_learner import *
from fastai.models.cifar10.wideresnet import wrn_22
torch.backends.cudnn.benchmark = True
PATH = Path("data/cifar10/")
os.makedirs(PATH,exist_ok=True)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159]))
bs=512
sz=32
tfms = tfms_from_stats(stats, sz, aug_tfms=[RandomCrop(sz), RandomFlip()], pad=sz//8)
data = ImageClassifierData.from_paths(PATH, val_name='test', tfms=tfms, bs=bs)
m = wrn_22()
learn = ConvLearner.from_model_data(m, data)
learn.crit = nn.CrossEntropyLoss()
learn.metrics = [accuracy]
wd=1e-4
lr=1.5
%time learn.fit(lr, 1, wds=wd, cycle_len=30, use_clr_beta=(20,20,0.95,0.85))
###Output
_____no_output_____
|
experiments/preprocessing_performance.ipynb
|
###Markdown
LCSim
###Code
fnames = {"lcsim1500_bigru1_m0":"no scaling",
"lcsim1500_bigru1_m1":"sigma scaling",
"lcsim1500_bigru1_m2":"derivatives"}
metric, split = "avgprc", "test"
make_plot(fnames, metric, split, ylbl="Average \nprecision (AP)", ylbl2="AP",
title="LCSim-1500", legend_next=1, figsize=(8.2,3), prepend="\n ",
round_lst=[[3,3],[3,3],[3,3]])
plt.savefig("figures/lcsim1500_AP_pp-basic.pdf")
plt.show()
# metric, split = "aucroc", "test"
# make_plot(fnames, metric, split, ylbl="AUCROC", title="LCSim-1500 test")
get_aps(fnames, metric, split)
fnames = {"lcsim1500gaps_bigru1_m1_n1":"zero-filling",
'lcsim1500gaps_bigru1_m1_n2':"lin. interp.",
'lcsim1500gaps_bigru1gen_m1_n0':"generative"}
metric, split = "avgprc", "test"
make_plot(fnames, metric, split, ylbl="Average \nprecision (AP)", ylbl2="AP",
title="LCSim-1500-Gap", figsize=(8.3,3), legend_next=True, prepend="\n ",
round_lst=[[3,3],[3,3],[3,3]])
plt.savefig("figures/lcsim1500_AP_pp-gaps.pdf")
plt.show()
# metric, split = "aucroc", "test"
# make_plot(fnames, metric, split, ylbl="AUCROC", title="LCSim-1500 test")
get_aps(fnames, metric, split)
###Output
zero-filling : 0.417 +/- 0.0056 (0.411 , 0.422 )
lin. interp. : 0.438 +/- 0.0088 (0.429 , 0.447 )
generative : 0.456 +/- 0.0071 (0.449 , 0.463 )
###Markdown
Lilith
###Code
fnames = {'lilith1500basic_bigru1_m0_n1': "no scaling, zero-filling",
'lilith1500basic_bigru1_m0_n2': "no scaling, lin. interp.",
'lilith1500basic_bigru1_m1_n1': "sigma scaling, zero-filling",
'lilith1500basic_bigru1_m1_n2': "sigma scaling, lin. interp."}
#'lilith1500basic_bigru1_m2_n2': "sigma scaling, lin. interp., derivatives"}
metric, split = "avgprc", "test"
make_plot(fnames, metric, split, ylbl="Average \nprecision (AP)", ylbl2="AP",
title="Lilith-1500", legend_next=True, figsize=(9.1,3), prepend="\n ",
round_lst=[[3,3] for i in range(4)],
append_lst = [["",""], ["0",""],["",""],["",""]], bbox_help=(1,1.1))
plt.savefig("figures/lilith1500_AP_pp-basic-gaps.pdf")
plt.show()
# metric, split = "aucroc", "test"
# make_plot(fnames, metric, split, ylbl="AUCROC", title="Lilith-1500 test")
get_aps(fnames, metric, split)
fnames = {'lilith1500basic_bigru1_m1_w3sqrt':"basic",
'lilith1500basic_bigru1centr_m1_n2_w3sqrt':"centroids",
'lilith1500hrd_bigru1_m1_n2_w3sqrt':"HRD",
'lilith1500lrd_bigru1_m1_n2_w3sqrt':"LRD",
'lilith1500outlier_bigru1_m1_n2_w3sqrt':"outliers"}
metric, split = "avgprc", "test"
make_plot(fnames, metric, split, ylbl="Average \nprecision (AP)", ylbl2="AP",
title="Lilith-1500",pr_curve=False, legend_next=True, figsize=(9.2,3),
round_lst=[[3,3] for i in range(5)], prepend=" ",
append_lst = [["",""], ["",""],["",""],["0",""],["",""]])
plt.savefig("figures/lilith1500_AP_pp-advanced.pdf")
plt.show()
# metric, split = "aucroc", "test"
# make_plot(fnames, metric, split, ylbl="AUCROC", title="Lilith-1500 test")
get_aps(fnames, metric, split)
###Output
basic : 0.555 +/- 0.016 (0.539 , 0.57 )
centroids : 0.563 +/- 0.0025 (0.56 , 0.565 )
HRD : 0.564 +/- 0.0029 (0.561 , 0.567 )
LRD : 0.57 +/- 0.0016 (0.569 , 0.572 )
outliers : 0.488 +/- 0.039 (0.449 , 0.527 )
###Markdown
Data gapsSimulated gaps, zero-filling, linear interpolation, predicting missing values.
###Code
with open("results/low_risk_flatten_lc.pkl", "rb") as f:
lc = pickle.load(f)
toffs = lc["time"][0]
lc["time"]-=toffs
sigma = np.nanstd(flatten(lc["time"], lc["flux"], method="median", window_length=utils.hour2day(0.5)))
fluxes = [lc["flux"], lc["flat"], np.diff(lc["flux"], prepend=lc["flux"][0])]
fluxes2 = [lc["flux"], np.diff(lc["flux"]/sigma, prepend=lc["flux"][0]),lc["flat"]]
spans = [i-toffs for i in [1321, 1327]]
span_t = utils.min2day(1500 * 2)
cs = [plt.plot([])[0].get_color() for i in range(2)]
plt.close()
titles=["Raw flux", "Flux derivative", "Low-risk detrended"]
plt.figure(figsize=(8,4))
for i, fl in enumerate(fluxes2):
if i!=1:
plt.subplot(3,3,i+1)
plt.title(titles[i], fontsize=14)
for j, sp_t in enumerate(spans):
plt.axvspan(sp_t, sp_t+span_t, alpha=0.2, zorder=-1, color=cs[j])
vis.plot(lc["time"], fl)
vis.plot(lc["time"], lc["trend"], scatter=0, c="red", s=1.5) if i==0 else None
plt.xlim(10,20)
plt.ylim(0.925, 1.05) if i != 1 else plt.ylim(0.925-1, 1.05-1)
plt.ylabel("Flux", fontsize=14) if i==0 else None
plt.xticks([10,12,14,16,18,20],[10,12,14,16,18,20],fontsize=13)
for j, sp_t in enumerate(spans):
plt.subplot(3,3,i+1+(j+1)*3)
plt.title(titles[i], fontsize=14) if i==1 and j==0 else None
plt.axvspan(sp_t, sp_t+span_t, alpha=0.1, zorder=-1, color=cs[j])
plt.xlim(sp_t, sp_t+span_t)
msk = (lc["time"]>=sp_t) & (lc["time"]<=sp_t+span_t)
if i != 1:
vis.plot(lc["time"][msk], (fl[msk]-1)/sigma)
else:
vis.plot(lc["time"][msk], fl[msk])
plt.ylim(-7,7) if i!=0 else None
plt.xticks(fontsize=13)
plt.yticks(fontsize=13)
plt.xlabel("Time (days)", fontsize=14) if j==1 else None
plt.ylabel("Flux*", fontsize=14) if i==0 else None
# plt.show()
plt.tight_layout()
plt.savefig("figures/input-range-example.pdf")
plt.show()
from dataloading import loading as dl
sttngs=['gaps',"zero","lininterp"]
nanmodes = [0,1,2]
loaders = {s:{} for s in sttngs}
loaders["original"] = {}
np.random.seed(42)
# purposely used 'valid' here instead of 'train' for train_path
loaders["original"]["show"], _, _ = dl.get_loaders_fn(train_path="data/nn/LCSim-1500/valid",
valid_path="data/nn/LCSim-1500/valid", test_path=None,
train_batch=128, valid_batch=1000, mode=0, nanmode=0,
scale_median=0, standardize=0, incl_centr=False, insert_gaps=False)
for s, n in zip(sttngs,nanmodes):
print(s, "show")
np.random.seed(42)
loaders[s]["show"], _, _ = dl.get_loaders_fn(train_path="data/nn/LCSim-1500/valid",
valid_path="data/nn/LCSim-1500/valid", test_path=None,
train_batch=128, valid_batch=1000, mode=0, nanmode=n,
scale_median=0, standardize=0, incl_centr=False, insert_gaps=True)
np.random.seed(42)
print(s, "rnn")
loaders[s]["rnn"], _, _ = dl.get_loaders_fn(train_path="data/nn/LCSim-1500/valid",
valid_path="data/nn/LCSim-1500/valid", test_path=None,
train_batch=128, valid_batch=1000, mode=1, nanmode=n,
scale_median=0, standardize=1, incl_centr=False, insert_gaps=True)
mnames = {'lcsim1500gaps_bigru1_m1_n1': "zero-filling",
'lcsim1500gaps_bigru1_m1_n2':"linear interpolation",
'lcsim1500gaps_bigru1gen_m1_n0':"generative network"}
models = {}
for mname, lbl in mnames.items():
models[lbl] = torch.load("models_all/"+mname+"/model_0.pt")
# pick rn seed
rn_seed = [42, 212][1]
np.random.seed(rn_seed)
# tr = np.where(loaders["gaps"]["show"].dataset.transit)[0]
gp = np.where(np.isnan(loaders["gaps"]["show"].dataset.flux).sum(1) > 200)[0]
rd = np.where(loaders["gaps"]["show"].dataset.rdepth.max(1)[0] > 2)[0]
idx = np.random.choice(np.intersect1d(gp,rd))
with torch.no_grad():
pts_zero, _ = models["zero-filling"](loaders["zero"]["rnn"].dataset.flux[idx].view(1,-1))
pts_interp, _ = models["linear interpolation"](loaders["lininterp"]["rnn"].dataset.flux[idx].view(1,-1))
pts_gen, _, preds = models["generative network"](loaders["gaps"]["rnn"].dataset.flux[idx].view(1,-1))
pts_zero, pts_interp, pts_gen, preds = pts_zero.numpy(), pts_interp.numpy(), pts_gen.numpy(), preds.numpy()
sigma_est = np.nanstd(flatten(t[:300], flux[:300], method="median", window_length=utils.min2day(30)))
gap_mean = -0.094193729368438
gap_std = 2.6607948866270665
fix_pred = lambda p: (((p*gap_std)+gap_mean)*sigma_est+1)
t = np.arange(1500) * utils.min2day(2)
flux = loaders["original"]["show"].dataset.flux[idx].numpy()+1
flux_gap = loaders["gaps"]["show"].dataset.flux[idx].numpy()+1
flux_zero = loaders["zero"]["show"].dataset.flux[idx].numpy()+1
flux_interp = loaders["lininterp"]["show"].dataset.flux[idx].numpy()+1
m = loaders["original"]["show"].dataset.mask[idx].numpy().astype(bool)
w = 4
fig = plt.figure(figsize=(13,3))
gs = fig.add_gridspec(2,4)
pts_list = [None, pts_zero, pts_interp, pts_gen]
flux_list = [flux, flux_zero, flux_interp, flux_gap]
titles = ["Original", "Zero-filled gaps", "Linear interpolation", "Generative network"]
for i, fl in enumerate(flux_list):
fig.add_subplot(gs[0,i])
plt.title(titles[i], fontsize=14)
vis.plot(t[~m], fl[~m])
vis.plot(t[m], fl[m], c="orange")
plt.yticks(fontsize=13) if i==0 else plt.yticks([])
plt.xticks([]) if i>0 else plt.xticks(fontsize=13)
plt.xlabel("Time (days) ", fontsize=14) if i==0 else None
plt.ylabel("Flux", fontsize=14) if i==0 else None
if i==0:
continue
fig.add_subplot(gs[1,i])
# plt.subplot(2,w,i+1+w)
pts = pts_list[i]
vis.plot(t, gaussian_filter1d(pts.squeeze(),9), scatter=0)
plt.ylim(-0.05,1.05)
plt.xticks([])
plt.yticks(fontsize=13) if i==1 else plt.yticks([])
plt.ylabel('RNN \noutput ', fontsize=14) if i==1 else None
plt.savefig("figures/gap_examples.pdf")
# plt.tight_layout()
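# Estimate the photometric scatter from the first 300 flattened points, then undo the
# RNN-output standardisation (gap_mean/gap_std look like fixed normalisation constants
# from training) and rescale the predictions back to relative flux around a baseline of 1.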
sigma_est = np.nanstd(flatten(t[:300], flux[:300], method="median", window_length=utils.min2day(30)))
gap_mean = -0.094193729368438
gap_std = 2.6607948866270665
fix_pred = lambda p: (((p*gap_std)+gap_mean)*sigma_est+1)
plt.figure(figsize=(4.5,2.7))
plt.title("Generative network predictions", fontsize=14)
vis.plot(t[~m], flux_gap[~m], a=0.5)
vis.plot(t[m], flux_gap[m], c="orange", a=1)
# plt.xticks([]), plt.yticks([])
plt.xticks(fontsize=13), plt.yticks(fontsize=13)
plt.ylabel("Flux", fontsize=14), plt.xlabel("Time (days)", fontsize=14)
cs = [plt.plot([])[0].get_color() for i in range(8)]
plt.plot([],c=cs[3],label="left-to-right", linewidth=2)
plt.plot([],c=cs[2],label="right-to-left", linewidth=2)
vis.plot(t, fix_pred(preds[0]) , scatter=0, c=cs[3], s=2, a=.7)
vis.plot(t, fix_pred(preds[1]), scatter=0, c=cs[2], s=2, a=1 )
plt.ylim(0.983,1.006)
plt.legend(fontsize=12)
plt.tight_layout()
plt.savefig("figures/generative-rnn_predictions.pdf")
plt.show()
np.tan(4), np.tan(6 * np.arctan(4)), np.tan(6)
###Output
_____no_output_____
|
utilsToLearn/tensorflow/tensorflowReader.ipynb
|
###Markdown
* prepare all the things
###Code
tf.reset_default_graph()
# set any directory you want, but it should contain some images
IMAGEPATH = "/home/breakpoint/software/caffe/data/flickr_style/images/*.jpg"
def preprocess(image,
height,
width):
# Image processing for training the network. Note the many random
# distortions applied to the image.
# key , image = read_images(filename_queue)
reshaped_image =tf.cast(image,tf.float32)
# Randomly crop a [height, width] section of the image.
distorted_image = tf.random_crop(reshaped_image, [height, width, 3])
# Randomly flip the image horizontally.
distorted_image = tf.image.random_flip_left_right(distorted_image)
# Because these operations are not commutative, consider randomizing
# the order of their operations.
distorted_image = tf.image.random_brightness(distorted_image,
max_delta=63)
distorted_image = tf.image.random_contrast(distorted_image,
lower=0.2, upper=1.8)
# Subtract off the mean and divide by the variance of the pixels.
float_image = tf.image.per_image_whitening(distorted_image)
return float_image
def read_images(filename_queue,
height,width,
isFixdLength=False,
isProcess=False):
if(not isFixdLength):
image_reader = tf.WholeFileReader()
else:
print('not suppported now')
key,image_file = image_reader.read(filename_queue)
# decode the image according to its type
# (here a jpeg, for example)
# here [height,width,depth]
image = tf.image.decode_jpeg(image_file,channels=3)
image = tf.cast(image,tf.uint8)
# image_bytes = tf.decode_raw(image_file,out_type=tf.uint8)
# image = tf.reshape(image_bytes,[3,])
if(isProcess):
processed_image = preprocess(image,
height=height,
width=width)
return key,processed_image
return key , image
def generate_image(path2image,batch_size=None,isProcess=False):
# filenames' queue of images
filename_queue = tf.train.string_input_producer(
tf.train.match_filenames_once(path2image))
# default setting
height = 32
width = 32
# choose if it should be preprocessed
key,float_image = read_images(filename_queue,
height=height,
width=width,
isProcess=isProcess)
if(batch_size):
if(not isProcess):
float_image = tf.random_crop(float_image, [height, width, 3])
# some arguments to set
min_after_dequeue = 128
capacity = min_after_dequeue + 3 * batch_size
num_preprocess_threads = 3
image_batch = tf.train.shuffle_batch(
[float_image],
batch_size=batch_size,
capacity=capacity,
num_threads=num_preprocess_threads,
min_after_dequeue=min_after_dequeue)
return key,image_batch
return key,float_image
# float_image_batch_tensor: it may be a 4D batch or a single 3D image; both cases are handled below
# (admittedly, the structure of this helper is a bit rough)
def display_image(float_image_batch_tensor,max_display=5,batch_size=None):
print(float_image_batch_tensor.shape)
print('display some images')
if(not batch_size):
# print('display image:%s'%image_name)
print(float_image_batch_tensor)
# note here image is not batched
uint8_image_batch_tensor = float_image_batch_tensor.astype(np.uint8)
if(batch_size==None):
plt.imshow(Image.fromarray(uint8_image_batch_tensor))
return
if(batch_size>max_display):
print('too much to display all')
else:
max_display = batch_size
for i in range(max_display):
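# builds a subplot position code such as 151, 152, ... (1 row, max_display columns,
# (i+1)-th panel); this trick only works while both digits stay below 10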
plt.subplot(int('1'+str(max_display)+str(i+1)))
plt.axis('off')
plt.imshow(Image.fromarray(uint8_image_batch_tensor[i]))
def run_test(path2image,
batch_size=None,
debug=True,
isProcess=False):
# run
with tf.Graph().as_default():
key,float_image_batch = generate_image(path2image,
batch_size=batch_size,
isProcess=isProcess)
# image_shape = tf.shape(image_batch)
with tf.Session() as sess:
# init must be created after the rest of the graph has been constructed
init = tf.initialize_all_variables()
init.run()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
image_name,float_image_batch_tensor = sess.run([key,float_image_batch])
# shape_tensor = sess.run([image_shape])
# print(shape_tensor)
# show the result for debugging if requested
if(debug):
display_image(float_image_batch_tensor,5,batch_size)
# reduce threads
coord.request_stop()
coord.join(threads)
# test function listed before
run_test(IMAGEPATH,
isProcess=False)
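# A hypothetical follow-up call (not executed in this notebook): read a shuffled batch
# of five preprocessed 32x32 crops instead of a single raw image.
# run_test(IMAGEPATH, batch_size=5, isProcess=True)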
###Output
(375, 500, 3)
display some images
[[[164 140 102]
[152 134 88]
[161 140 93]
...,
[150 122 85]
[153 123 85]
[154 122 84]]
[[169 149 116]
[159 136 92]
[162 144 104]
...,
[144 113 67]
[150 117 72]
[142 109 64]]
[[161 143 97]
[146 127 85]
[154 130 94]
...,
[140 103 58]
[146 111 69]
[154 121 80]]
...,
[[120 120 110]
[129 129 117]
[138 138 126]
...,
[118 113 91]
[122 116 94]
[108 100 79]]
[[156 145 139]
[143 132 126]
[160 151 144]
...,
[104 99 79]
[111 104 86]
[108 101 83]]
[[174 180 168]
[168 172 158]
[181 183 170]
...,
[106 101 81]
[110 103 84]
[104 97 79]]]
|
Tutorials/Udacity/5_word2vec.ipynb
|
###Markdown
Deep Learning
=============

Assignment 5
------------

The goal of this assignment is to train a Word2Vec skip-gram model over [Text8](http://mattmahoney.net/dc/textdata) data.
###Code
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
###Output
_____no_output_____
###Markdown
[collections.Counter](http://pymbook.readthedocs.io/en/latest/collections.html)
[collections.deque](https://pythontips.com/2014/07/02/an-intro-to-deque-module/)
[zipfile](https://docs.python.org/3.3/library/zipfile.html)

Download the data from the source website if necessary.
###Code
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
###Output
Found and verified text8.zip
###Markdown
Read the data into a string.
###Code
def read_data(filename):
"""Extract the first file enclosed in a zip file as a list of words"""
with zipfile.ZipFile(filename) as f:
# f.namelist() return a list of the names of the files in the zip file.
data = tf.compat.as_str(f.read(f.namelist()[0])).split() # tf.compat.as_str() convert input as a string
# string.split() returns a list of substrings split on whitespace (the whitespace itself is excluded)
return data
words = read_data(filename)
print('Data size %d' % len(words))
###Output
Data size 17005207
###Markdown
Build the dictionary and replace rare words with the UNK (unknown) token. Rare words are those that do not appear among the 50,000 most frequent words in the dataset.
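The cell below produces four objects; their shapes (with values taken from the sample output further down) look like this:

```python
# count              : [['UNK', 418391], ('the', 1061396), ('of', 593677), ...]
# dictionary         : word -> rank, e.g. {'UNK': 0, 'the': 1, 'of': 2, 'and': 3, ...}
# reverse_dictionary : rank -> word, e.g. {0: 'UNK', 1: 'the', 2: 'of', 3: 'and', ...}
# data               : the corpus with each word replaced by its rank, e.g. [5241, 3081, 12, 6, ...]
```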
###Code
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
# Counter(words).most_common(num) returns a list of the num most common words in `words`,
# as tuples of the form ('word', count)
dictionary = dict() # empty dictionary; dict is a class in python; dictionary = {}
for word, _ in count:
dictionary[word] = len(dictionary)
# since the words in count are ordered from most to least common, len(dictionary) at insertion time is the word's frequency rank
data = list() # data = []
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else: # the words that are not belong to the most 50000 common
index = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index)
# data is the corpus with every word replaced by its rank (0 for UNK)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
# dictionary.values() returns a view of the dictionary's values.
# zip pairs them with the keys (see the note on zip below).
# reverse_dictionary: keys and values are swapped, i.e. rank -> word.
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words # Hint to reduce memory.
###Output
Most common words (+UNK) [['UNK', 418391], ('the', 1061396), ('of', 593677), ('and', 416629), ('one', 411764)]
Sample data [5241, 3081, 12, 6, 195, 2, 3135, 46, 59, 156]
###Markdown
[zip() method](https://bradmontgomery.net/blog/pythons-zip-map-and-lambda/): This function takes two equal-length collections, and merges them together in pairs.

Function to generate a training batch for the skip-gram model.
###Code
data_index = 0
# big window layout: [ skip_window words | centre word | skip_window words ]
# batch_size: 8, the number of (input, label) pairs per batch; each batch uses
#   batch_size // num_skips centre words.
# num_skips: 2 (or 4), how many context targets are drawn for each centre word,
#   i.e. how many times one input word is reused; batch_size % num_skips must be 0.
# skip_window: 1 (or 2), the number of words taken on each side of the centre word;
#   skip_window is also the index of the centre word inside the big window.
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
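# a deque with maxlen=span keeps only the last `span` word ids: once it is full,
# each append drops the oldest entry, so the buffer slides over `data` one word at a time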
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target is at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid: # make sure target is not the central word
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]]) # 8 sequential words in a sentence
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 0 # each time generate batch from 0
batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
###Output
data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first']
with num_skips = 2 and skip_window = 1:
batch: ['originated', 'originated', 'as', 'as', 'a', 'a', 'term', 'term']
labels: ['as', 'anarchism', 'a', 'originated', 'as', 'term', 'of', 'a']
with num_skips = 4 and skip_window = 2:
batch: ['as', 'as', 'as', 'as', 'a', 'a', 'a', 'a']
labels: ['a', 'originated', 'anarchism', 'term', 'as', 'term', 'of', 'originated']
###Markdown
Train a skip-gram model.
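The graph below also builds a validation op: the embeddings are L2-normalised, so the similarity between the 16 validation words and the whole vocabulary is just a matrix product of unit vectors, i.e. cosine similarity. A standalone sketch of that idea in NumPy (shapes and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
valid = rng.standard_normal((16, 128))     # stand-ins for 16 validation embeddings
vocab = rng.standard_normal((50000, 128))  # stand-ins for the full embedding matrix

# normalise rows to unit length, so dot products become cosines
valid /= np.linalg.norm(valid, axis=1, keepdims=True)
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)

sim = valid @ vocab.T                      # (16, 50000) cosine similarities
nearest = (-sim).argsort(axis=1)[:, 1:9]   # 8 nearest neighbours per validation word
```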
###Code
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left or right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation dataset to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # How many random set of words to evaluate similarity on.
valid_window = 100 # Size of the pool, at the head of the frequency distribution, from which validation samples are picked.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample in sampled softmax
# graph: words -> embeddings -> words
# 1st layer: tf.nn.embedding_lookup
# 2nd layer: tf.nn.sampled_softmax_loss
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
# tf.nn.embedding_lookup() maps each word id to its embedding row (a one-to-one lookup);
# the embedding values themselves are initialised uniformly at random.
# tf.random_uniform(shape,minvalue,maxvalue)
# we have vocabulary_size = 50000 words
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
# Note: The optimizer will optimize the softmax_weights AND the embeddings.
# This is because the embeddings are defined as a variable quantity and the
# optimizer's `minimize` method will by default modify all variable quantities
# that contribute to the tensor it is passed.
# See docs on `tf.train.Optimizer.minimize()` for more details.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
# norm(50000,1)
normalized_embeddings = embeddings / norm
# embeddings / norm is equivalent to tf.div, i.e. an element-wise operation
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
# valid_embeddings has the 16 words' vectors in embeddings.
# normalized_embeddings has 50000 words' vectors in embeddings.
# so the matrix multiplication in `similarity` is like taking the inner product of the embedding vectors of the 16 words
# and ones of 50000 words.
# So it's like find the similarity of the embedding vectors of 16 words from 50000 words.
# inner product: A.B = |A|*|B|*cos(A,B)
print(similarity.get_shape())
print(valid_embeddings.get_shape())
print(normalized_embeddings.get_shape())
train_dataset
batch_data, batch_labels = generate_batch(batch_size, num_skips, skip_window)
type(batch_data)
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
# batch_data contains the ranks; a word's rank is also used as its ID
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
#print(type(sim)) #-> numpy.ndarray
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
# sim[i,:] the similaries of i-th word from the 16 words and every word from 50000 words.
# argsort() get the sorted ndarray and return the indices
# [1:top_k+1] avoids the word itself as the highest score
# indices are the ranks of the frequency in data
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
# append close_word to the log string
print(log)
final_embeddings = normalized_embeddings.eval()
###Output
Initialized
Average loss at step 0: 7.912311
Nearest to however: shipwreck, ferus, walks, lakoff, propagate, wim, signature, sap,
Nearest to after: subordinate, embarrassment, supranational, malpractice, permanence, seabirds, arcadius, variste,
Nearest to about: overrun, whimsical, acquires, monorails, cyprian, goldsmiths, suppliers, dominions,
Nearest to they: despotism, durability, entails, basketball, unhappy, usaid, airplay, hummel,
Nearest to not: asserted, qadhafi, herndon, advance, sucker, neberg, limited, rajonas,
Nearest to zero: lata, lumpur, killian, brahmin, agitation, isn, paradiso, unbelief,
Nearest to it: telekom, chrono, neoclassical, hymnody, classicists, lemurs, constructed, gutierrez,
Nearest to i: disadvantageous, songwriting, glick, specialise, colombo, necromancer, gardnerian, winnie,
Nearest to with: finish, bead, cyanides, discussed, mill, dartmouth, nosferatu, prohibitions,
Nearest to only: cheats, bane, handhelds, hurry, claudian, lazuli, emergence, telstra,
Nearest to years: wages, confirmed, groupoid, choice, calculates, balaguer, alto, intron,
Nearest to no: snowflake, iverson, surfers, dijon, cursing, museum, xxxiii, announced,
Nearest to into: burdens, delta, logan, greenish, fantasia, lowell, pomona, wavell,
Nearest to had: scrolling, freestyle, geri, heteronyms, creeds, jana, jacketed, curvature,
Nearest to six: worshippers, tyndall, navigable, fractures, hypothesizing, deficiency, mehmet, sidecar,
Nearest to other: fermionic, domitia, deleted, enduring, nightfall, umi, hellish, garbage,
Average loss at step 2000: 4.360159
Average loss at step 4000: 3.865892
Average loss at step 6000: 3.790551
Average loss at step 8000: 3.681730
Average loss at step 10000: 3.608492
Nearest to however: propagate, shipwreck, tfl, solicit, ni, but, darmstadt, ferus,
Nearest to after: shamash, supranational, canned, before, coincidental, cob, seabirds, vaud,
Nearest to about: cyprian, acquires, overrun, layoffs, woensel, dion, availability, aceh,
Nearest to they: he, we, despotism, who, lech, she, there, tile,
Nearest to not: it, also, mozart, etiology, capra, atrophy, zaman, rajonas,
Nearest to zero: nine, five, seven, eight, six, three, four, two,
Nearest to it: he, this, there, which, not, she, feodor, you,
Nearest to i: songwriting, disadvantageous, glick, freight, djing, el, coolant, necromancer,
Nearest to with: in, between, rios, of, by, for, respective, on,
Nearest to only: bane, cheats, hoard, games, pesky, yaum, claudian, icmp,
Nearest to years: alcal, incredibly, ace, urquhart, confirmed, renarrative, kbit, female,
Nearest to no: snowflake, recapture, overman, iverson, nitro, alienating, prescribe, announced,
Nearest to into: burdens, from, delta, affluence, logan, single, fantasia, censorware,
Nearest to had: has, have, was, introduced, discriminatory, reassigned, were, resume,
Nearest to six: five, seven, eight, four, three, nine, two, zero,
Nearest to other: fermionic, malice, cond, enduring, flugelhorn, bhangra, stead, anthropology,
Average loss at step 12000: 3.608212
Average loss at step 14000: 3.569944
Average loss at step 16000: 3.409416
Average loss at step 18000: 3.456980
Average loss at step 20000: 3.544935
Nearest to however: but, propagate, tfl, darmstadt, shipwreck, karamanlis, let, solicit,
Nearest to after: shamash, before, when, during, from, supranational, for, canned,
Nearest to about: ioan, cyprian, phimosis, ecumenical, acquires, damp, monorails, dhea,
Nearest to they: he, we, there, who, despotism, you, it, she,
Nearest to not: also, it, they, etiology, capra, replenished, strongly, there,
Nearest to zero: five, seven, six, three, four, eight, nine, two,
Nearest to it: he, this, there, she, which, they, you, not,
Nearest to i: ii, songwriting, we, cm, ciii, prentice, disadvantageous, they,
Nearest to with: between, in, into, fourteenth, for, respective, melchior, by,
Nearest to only: cheats, conversational, presque, falsificationism, centimeter, sandro, really, games,
Nearest to years: days, kbit, urquhart, incredibly, alcal, cheerful, till, profession,
Nearest to no: iverson, snowflake, alienating, funk, recapture, amides, susima, generally,
Nearest to into: from, affluence, single, through, with, burdens, delta, logan,
Nearest to had: has, have, was, were, when, pls, having, agnostic,
Nearest to six: eight, four, seven, nine, five, three, zero, two,
Nearest to other: many, some, stead, fermionic, carlos, carolyn, malice, are,
Average loss at step 22000: 3.498957
Average loss at step 24000: 3.491244
Average loss at step 26000: 3.485455
Average loss at step 28000: 3.480572
Average loss at step 30000: 3.502394
Nearest to however: but, propagate, where, they, tfl, darmstadt, although, that,
Nearest to after: before, during, when, shamash, until, for, from, disavowed,
Nearest to about: cyprian, ioan, acquires, relocate, inside, ulrike, attu, euskal,
Nearest to they: we, he, there, who, it, you, she, not,
Nearest to not: they, probably, there, it, still, this, atrophy, capra,
Nearest to zero: five, seven, eight, six, four, three, nine, two,
Nearest to it: he, she, there, this, they, which, also, not,
Nearest to i: ii, we, cm, you, songwriting, el, iii, they,
Nearest to with: between, wellesley, in, by, when, respective, including, for,
Nearest to only: games, tanakh, savanna, gotlanders, grandsons, gollum, really, anderson,
Nearest to years: days, months, kbit, urquhart, times, cheerful, kyoto, bengals,
Nearest to no: any, amides, iverson, it, snowflake, gv, alienating, a,
Nearest to into: from, through, affluence, logan, in, with, rbis, burdens,
Nearest to had: has, have, was, were, having, altruists, since, is,
Nearest to six: eight, four, seven, nine, five, three, two, zero,
Nearest to other: various, melinda, many, those, some, such, are, nylon,
Average loss at step 32000: 3.500690
Average loss at step 34000: 3.496553
Average loss at step 36000: 3.455496
Average loss at step 38000: 3.305411
Average loss at step 40000: 3.425501
Nearest to however: but, propagate, that, though, although, they, kettering, darmstadt,
Nearest to after: before, shamash, during, viscous, when, censorial, from, cob,
Nearest to about: ioan, acquires, attu, relocate, antagonist, ulrike, monorails, cyprian,
Nearest to they: we, he, you, there, it, not, she, i,
Nearest to not: they, it, still, probably, often, vassar, widely, capra,
Nearest to zero: seven, five, eight, two, six, nine, three, four,
Nearest to it: he, she, there, this, they, still, which, not,
Nearest to i: ii, we, you, cm, they, he, t, terrier,
Nearest to with: between, rios, by, semi, clavell, when, fairs, fourteenth,
Nearest to only: savanna, grandsons, gotlanders, carmelite, vp, gollum, hygiene, really,
Nearest to years: days, months, times, urquhart, kbit, jewishencyclopedia, kyoto, cheerful,
Nearest to no: any, amides, nitro, susima, snowflake, clanking, imparted, iverson,
Nearest to into: from, through, logan, back, affluence, delta, rbis, within,
Nearest to had: has, have, was, were, having, since, been, ferruccio,
Nearest to six: seven, eight, four, five, nine, three, two, one,
Nearest to other: various, those, some, hunting, gaeltacht, individualists, enormous, melinda,
Average loss at step 42000: 3.435364
Average loss at step 44000: 3.447884
Average loss at step 46000: 3.453778
Average loss at step 48000: 3.350678
Average loss at step 50000: 3.379875
Nearest to however: but, although, though, that, when, while, where, which,
Nearest to after: before, when, during, while, shamash, for, if, loathing,
Nearest to about: ulrike, relocate, antagonist, gheg, acquires, ioan, whole, bia,
Nearest to they: he, we, there, you, she, it, who, not,
Nearest to not: they, still, vassar, now, generally, subgroups, who, atrophy,
Nearest to zero: eight, seven, five, six, four, three, two, nine,
Nearest to it: he, she, there, this, they, still, now, promotes,
Nearest to i: we, ii, you, cm, they, t, tansley, terrier,
Nearest to with: between, fourteenth, darya, wellesley, clavell, against, hygienic, while,
Nearest to only: really, carmelite, always, savanna, grandsons, lip, radially, scientifically,
Nearest to years: days, months, times, ways, kbit, centuries, urquhart, attract,
Nearest to no: any, peabody, gv, amides, nothing, alienating, susima, quantify,
Nearest to into: through, from, back, logan, within, across, around, delta,
Nearest to had: has, have, was, were, having, been, since, sens,
Nearest to six: eight, seven, four, five, nine, three, two, zero,
Nearest to other: various, many, different, some, hunting, those, including, malice,
Average loss at step 52000: 3.437261
Average loss at step 54000: 3.425769
Average loss at step 56000: 3.436059
Average loss at step 58000: 3.398487
Average loss at step 60000: 3.395781
Nearest to however: but, although, though, which, that, when, despite, while,
Nearest to after: before, when, during, shamash, without, while, despite, viscous,
Nearest to about: ulrike, relocate, ioan, antagonist, over, gheg, whole, coronary,
Nearest to they: we, there, you, he, she, i, it, cumbria,
Nearest to not: still, now, probably, atrophy, they, who, usually, we,
Nearest to zero: five, seven, four, six, eight, three, nine, two,
Nearest to it: he, she, there, this, which, still, they, what,
Nearest to i: we, ii, you, t, cm, they, tansley, iii,
Nearest to with: between, fourteenth, into, while, inelastic, when, wellesley, payoffs,
Nearest to only: really, always, first, journeyman, scientifically, grandsons, lip, pontus,
Nearest to years: days, months, times, centuries, decades, minutes, year, urquhart,
Nearest to no: any, peabody, a, gv, nothing, alienating, quantify, cognates,
Nearest to into: from, through, within, logan, across, back, with, around,
Nearest to had: has, have, was, were, having, been, never, subsequently,
Nearest to six: eight, four, five, nine, seven, three, zero, one,
Nearest to other: various, different, many, some, those, hunting, yak, more,
Average loss at step 62000: 3.243026
Average loss at step 64000: 3.256715
Average loss at step 66000: 3.398159
Average loss at step 68000: 3.397893
Average loss at step 70000: 3.356833
Nearest to however: but, although, though, that, when, where, while, which,
Nearest to after: before, during, when, while, viscous, without, mauryan, until,
Nearest to about: ulrike, relocate, asparagus, over, antagonist, approximately, transpired, remedied,
Nearest to they: we, there, he, you, she, it, diatomaceous, cumbria,
Nearest to not: still, now, generally, probably, never, usually, frequently, also,
Nearest to zero: five, six, four, eight, seven, two, three, nine,
Nearest to it: he, she, there, this, they, still, samsara, which,
Nearest to i: we, ii, you, cm, tansley, licking, t, bacall,
Nearest to with: between, wellesley, including, fourteenth, while, into, when, without,
Nearest to only: always, exactly, grandsons, really, never, lip, not, avercamp,
Nearest to years: days, months, decades, centuries, times, minutes, year, urquhart,
Nearest to no: any, significant, quantify, peabody, funk, than, cognates, periodically,
Nearest to into: from, through, within, logan, with, across, back, around,
Nearest to had: has, have, was, were, having, been, pls, recently,
Nearest to six: eight, seven, nine, four, five, three, two, zero,
Nearest to other: various, different, many, benzene, including, spur, some, hunting,
Average loss at step 72000: 3.371886
Average loss at step 74000: 3.346062
Average loss at step 76000: 3.315682
Average loss at step 78000: 3.349079
Average loss at step 80000: 3.376295
Nearest to however: although, but, that, though, while, when, where, they,
Nearest to after: before, when, during, without, while, until, despite, viscous,
Nearest to about: approximately, ulrike, coolidge, over, remedied, relocate, least, antagonist,
Nearest to they: we, he, you, there, she, it, cumbria, these,
Nearest to not: still, now, generally, usually, it, we, vassar, probably,
Nearest to zero: five, seven, four, six, eight, three, nine, two,
Nearest to it: he, she, there, this, they, we, still, itself,
Nearest to i: ii, we, you, iii, tansley, t, cm, iv,
Nearest to with: between, wellesley, payoffs, in, fourteenth, when, including, into,
Nearest to only: grandsons, always, best, exactly, lip, really, avercamp, savanna,
Nearest to years: days, months, decades, minutes, times, centuries, year, ways,
Nearest to no: any, peabody, quantify, nothing, humanist, alienating, little, significant,
Nearest to into: through, from, within, across, logan, back, during, with,
Nearest to had: have, has, was, were, having, been, fled, never,
Nearest to six: eight, four, seven, five, three, nine, two, zero,
Nearest to other: various, hunting, others, potent, including, different, many, some,
Average loss at step 82000: 3.405456
Average loss at step 84000: 3.411629
Average loss at step 86000: 3.387811
Average loss at step 88000: 3.354794
Average loss at step 90000: 3.360676
Nearest to however: but, although, though, that, while, when, insufficiently, where,
Nearest to after: before, during, when, while, without, until, despite, from,
Nearest to about: coolidge, antagonist, ulrike, relocate, over, around, regarding, asparagus,
Nearest to they: we, you, he, there, she, it, but, cumbria,
Nearest to not: still, strongly, now, we, nor, grotto, vassar, belgrano,
Nearest to zero: five, eight, six, seven, four, two, three, nine,
Nearest to it: he, she, there, they, this, therefore, itself, often,
Nearest to i: ii, we, you, t, iii, newman, tansley, iv,
Nearest to with: between, in, by, including, wellesley, wet, into, fourteenth,
Nearest to only: grandsons, always, really, either, exactly, lip, avercamp, no,
Nearest to years: days, months, decades, minutes, centuries, year, times, hours,
Nearest to no: any, peabody, little, nothing, significant, quantify, cognates, periodically,
Nearest to into: through, from, across, within, around, back, logan, during,
Nearest to had: has, have, was, having, were, decided, adhered, fled,
Nearest to six: eight, seven, five, four, nine, three, two, zero,
Nearest to other: various, individual, others, potent, hunting, genevieve, different, including,
Average loss at step 92000: 3.398688
Average loss at step 94000: 3.256711
Average loss at step 96000: 3.356798
Average loss at step 98000: 3.242885
Average loss at step 100000: 3.357523
Nearest to however: although, but, though, that, where, when, and, which,
Nearest to after: before, when, during, without, while, despite, until, loathing,
Nearest to about: relocate, ulrike, around, energetic, approximately, on, nicopolis, coronary,
Nearest to they: we, he, there, you, she, it, i, cumbria,
Nearest to not: still, never, now, generally, strongly, nor, actually, also,
Nearest to zero: five, six, eight, four, seven, two, nine, three,
Nearest to it: he, she, this, there, they, bolland, which, often,
Nearest to i: we, you, ii, iii, they, t, tansley, newman,
Nearest to with: between, including, into, using, makes, in, fourteenth, within,
Nearest to only: really, still, ves, lip, exactly, carmelite, stemmed, storm,
Nearest to years: days, months, decades, centuries, minutes, year, times, weeks,
Nearest to no: any, peabody, nothing, lip, quantify, cognates, significant, taxed,
Nearest to into: through, from, within, across, in, back, logan, during,
Nearest to had: has, have, was, having, were, since, attempted, fled,
Nearest to six: seven, four, eight, nine, five, three, two, zero,
Nearest to other: various, others, hunting, honors, including, genevieve, individual, spur,
###Markdown
[Tensorflow Word2vec Tutorial](https://www.tensorflow.org/versions/r0.10/tutorials/word2vec/index.html)
[sklearn.manifold.TSNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html#sklearn.manifold.TSNE.fit_transform)
[pyplot](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot)
###Code
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
###Output
/usr/lib/python3/dist-packages/matplotlib/collections.py:549: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
if self._edgecolors == 'face':
###Markdown
---
Problem
-------

An alternative to skip-gram is another Word2Vec model called [CBOW](http://arxiv.org/abs/1301.3781) (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.

---

CBOW Model
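Concretely, for the sample sentence shown earlier ("anarchism originated as a term ..."), the two models turn a window around a centre word into training examples in opposite directions; the sketch below is illustrative only:

```python
# window of two words on each side of the centre word "as":
#   skip-gram : as -> anarchism, as -> originated, as -> a, as -> term
#   CBOW      : mean(vec(anarchism), vec(originated), vec(a), vec(term)) -> as
```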
###Code
#original data: [word1, word2, word3, word4,], let's say word3 is UNK.
#data:[rank_of_word1, rank_of_word2, 0_for_UNK, rank_of_word4,]
#count:[['UNK',frequency_of_UNK],['most_frequent_word',frequency],['second_most_frequent_word',frequency],]
#dictionary:{'UNK':0,'most_frequent_word':1,'word':rank,}
#reverse_dictionary
data_index = 0
def stuff_buffer(window_size):
global data_index
buffer = collections.deque(maxlen=window_size)
for _ in range(window_size):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return buffer
def CBOW_generate_batch(batch_size, window_size):
assert batch_size % window_size == 0
batch_input = list()
batch_target = list()
input_nm = list()
for _ in range(batch_size//window_size):
buffer = stuff_buffer(window_size)
for _ in range(window_size):
input_size = random.randint(2,window_size-1)
input_nm.append(input_size)
# use at least two context words to predict one target,
# and at most window_size-1 context words to predict the remaining one
input_pos = random.randint(0,window_size-(input_size+1))
# input_size+1 as a block, then from it choose the target
target_pos = random.randint(input_pos,input_pos+input_size)
for i in range(input_pos,input_pos+(input_size+1)):
if i == target_pos:
batch_target.append(buffer[target_pos])
else:
batch_input.append(buffer[i])
return batch_input, batch_target, input_nm
def CBOW_generate_batch2(batch_size, window_size):
input_ = np.ndarray(shape=(batch_size*(window_size-1)), dtype=np.int32)
target_ = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
batch_input = list()
batch_target = list()
block_pos = 0
for _ in range(batch_size):
#print('block_pos',block_pos)
buffer = stuff_buffer(window_size)
target_pos = random.randint(0,window_size-1)
#print('target_pos',target_pos)
for i in range(block_pos,block_pos+window_size):
#print('i%window_size',i%window_size)
if target_pos == i%window_size:
batch_target.append(buffer[target_pos])
#print('add target')
else:
batch_input.append(buffer[i%window_size])
#print('add input')
block_pos += window_size
#print('--------------')
input_ = np.asarray(batch_input)
target_ = np.asarray(batch_target).reshape([-1,1])
return input_, target_
for batch_size, window_size in [(3,3)]:
#input_, target_, nm_ = CBOW_generate_batch(8,4)
input_, target_ = CBOW_generate_batch2(batch_size,window_size)
input_d = [reverse_dictionary[rank] for rank in input_]
target_d = [reverse_dictionary[rank] for rank in target_.reshape(3)]
print('data: ',[reverse_dictionary[rank] for rank in data[:9]])
print('input: ',input_d)
print('target: ',target_d)
#print('number of input block:', nm_)
###Output
data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first', 'used']
input: ['anarchism', 'originated', 'a', 'of', 'abuse', 'first']
target: ['as', 'term', 'used']
###Markdown
train a CBOW model
###Code
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
window_size = 4
valid_size = 16 # How many random set of words to evaluate similarity on.
valid_window = 100 # Size of the pool, at the head of the frequency distribution, from which validation samples are picked.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample in sampled softmax
# graph: words -> embeddings -> words
# 1st layer: tf.nn.embedding_lookup
# 2nd layer: fully connected layer
# 2nd layer: tf.nn.sampled_softmax_loss. The above FC is contained in this method.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
input_dataset = tf.placeholder(tf.int32,shape=[batch_size*(window_size-1)])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
# tf.nn.embedding_lookup() maps each word id to its embedding row (a one-to-one lookup);
# the embedding values themselves are initialised uniformly at random.
# tf.random_uniform(shape,minvalue,maxvalue)
# we have vocabulary_size = 50000 words
softmax_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
train_dataset = tf.Variable(tf.zeros([batch_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, input_dataset)
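# reshape to (batch_size, window_size-1, embedding_size) and average the context
# vectors: this mean is the CBOW input representation fed to the sampled softmax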
matrix_embed = tf.reshape(embed,[batch_size,window_size-1,embedding_size])
input_embed = tf.reduce_sum(matrix_embed,1)/(window_size-1)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, input_embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
# Note: The optimizer will optimize the softmax_weights AND the embeddings.
# This is because the embeddings are defined as a variable quantity and the
# optimizer's `minimize` method will by default modify all variable quantities
# that contribute to the tensor it is passed.
# See docs on `tf.train.Optimizer.minimize()` for more details.
optimizer = tf.train.AdagradOptimizer(0.5).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
# norm(50000,1)
normalized_embeddings = embeddings / norm
# embeddings / norm is equivalent to tf.div, i.e. an element-wise operation
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
# valid_embeddings has the 16 words' vectors in embeddings.
# normalized_embeddings has 50000 words' vectors in embeddings.
# so the matrix multiplication in `similarity` is like taking the inner product of the embedding vectors of the 16 words
# and ones of 50000 words.
# So it's like find the similarity of the embedding vectors of 16 words from 50000 words.
# inner product: A.B = |A|*|B|*cos(A,B)
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
input_, target_ = CBOW_generate_batch2(batch_size,window_size)
#print(input_.shape)
#print(target_.shape)
feed_dict = {input_dataset:input_,train_labels:target_}
# batch_data contains the ranks; a word's rank is also used as its ID
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
#print(type(sim)) #-> numpy.ndarray
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
# sim[i,:] holds the similarities between the i-th of the 16 validation words and every one of the 50000 words.
# argsort() get the sorted ndarray and return the indices
# [1:top_k+1] avoids the word itself as the highest score
# indices are the ranks of the frequency in data
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
# append close_word to the log string
print(log)
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
###Output
/usr/lib/python3/dist-packages/matplotlib/collections.py:549: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
if self._edgecolors == 'face':
|
federated_averaging.ipynb
|
###Markdown
Example Experiment

Dataset: Labeled Faces in the Wild

Experiment: Two-party training for gender classification
###Code
import cl_simulator.server as server
import cl_simulator.workerclass as worker
import cl_simulator.workerhandler as wh
import cl_simulator.topology_utils as tu
from collections import OrderedDict
import pandas as pd
import numpy as np
import copy
import os
from PIL import Image
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
%matplotlib inline
import torch
import torchvision
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import torch.optim as optim
from torchvision import transforms
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
###Output
_____no_output_____
###Markdown
Parameters
###Code
epochs = 20
# epochs = 2
batch_size = 64
learning_rate = 0.001
server_learning_rate = 0.05
num_workers = 2
local_iterations = 2
data_path = '../data/lfw/data/'
# default `log_dir` is "runs" - we'll be more specific here
!rm -rf ./runs/experiment_2
writer = SummaryWriter('runs/experiment_2')
###Output
_____no_output_____
###Markdown
Divide Data between workers
###Code
attributes_df = pd.read_csv(data_path+'lfw_attributes.txt')
all_names = attributes_df.person.unique()
tt_msk = np.random.rand(len(all_names)) < 0.8
temp_train_names = all_names[tt_msk]
test_names = all_names[~tt_msk]
del all_names, tt_msk
train_val_df = attributes_df.loc[attributes_df['person'].isin(temp_train_names)]
test_df = attributes_df.loc[attributes_df['person'].isin(test_names)]
# add a column indicating which worker holds each row
train_val_df['target'] = 0
# allocate half of the people to the target worker (worker 1)
names = train_val_df['person'].drop_duplicates()
target_worker_names = names.sample(frac=1)[:int(len(names)/2)]
target_worker_names = target_worker_names.reset_index(drop=True)
# populate target field
for index, row in train_val_df.iterrows():
if row['person'] in target_worker_names.values:
train_val_df['target'][index] = 1
# print distribution of data
print("entries with worker 1: {}, entries with worker 2: {}, entries in training set: {}, total entries: {}".format(sum(train_val_df['target']==1), sum(train_val_df['target']==0), len(test_df), len(attributes_df)))
###Output
/home/sattvik/envs/pytorch_env/lib/python3.6/site-packages/ipykernel_launcher.py:12: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
if sys.path[0] == '':
/home/sattvik/envs/pytorch_env/lib/python3.6/site-packages/ipykernel_launcher.py:21: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
/home/sattvik/envs/pytorch_env/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3326: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
Define dataset class
###Code
class LFWDataset(Dataset):
"""LFW dataset."""
def __init__(self, data_path, attributes_df, transform=None):
self.attributes_df = attributes_df
self.data_path = data_path
self.transform = transform
def __len__(self):
return len(self.attributes_df)
def __getitem__(self, idx):
img_path = os.path.join(self.data_path, "lfw_home/lfw_funneled", self.attributes_df.iloc[idx]['person'].replace(' ', '_'),"{}_{:04d}.jpg".format(self.attributes_df.iloc[idx]['person'].replace(' ', '_'),self.attributes_df.iloc[idx]['imagenum']))
# img = torch.from_numpy(cv2.imread(img_path))
img = Image.open(img_path, mode='r')
label = self.attributes_df.iloc[idx]['Male']>0
if self.transform:
img = self.transform(img)
return img, torch.tensor(label, dtype=torch.float)
###Output
_____no_output_____
###Markdown
Define Model
###Code
class ResNet(nn.Module):
def __init__(self):
super(ResNet, self).__init__()
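# note: despite the class name, this is a plain conv/pool/FC network with no residual connections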
self.conv1 = nn.Conv2d(3, 8, 5)
self.pool1 = nn.MaxPool2d(4,4)
self.conv2 = nn.Conv2d(8, 16, 5)
self.pool2 = nn.MaxPool2d(4,4)
self.conv3 = nn.Conv2d(16, 32, 5)
self.pool3 = nn.MaxPool2d(2,2)
# self.conv4 = nn.Conv2d(32, 32, 5)
self.fc1 = nn.Linear(800, 512)
self.fc2 = nn.Linear(512, 64)
self.fc3 = nn.Linear(64, 1)
self.dropout_layer1 = nn.Dropout(p=0.6)
self.dropout_layer2 = nn.Dropout(p=0.5)
# self.dropout_layer3 = nn.Dropout(p=0.2)
def forward(self, x):
x = self.pool1(F.relu(self.conv1(x)))
x = self.pool2(F.relu(self.conv2(x)))
x = self.pool3(F.relu(self.conv3(x)))
# x = F.relu(self.conv4(x))
x = x.view(x.shape[0],-1)
x = self.dropout_layer1(x)
x = F.relu(self.fc1(x))
x = self.dropout_layer2(x)
# x = self.dropout_layer3(x)
x = F.relu(self.fc2(x))
x = F.sigmoid(self.fc3(x))
return x
def define_model():
return ResNet()
###Output
_____no_output_____
###Markdown
Training, Validation, and Evaluation functions
###Code
def perform_evaluation(val_model, dataloader):
with torch.no_grad():
epoch_loss = 0
epoch_accuracy = 0
for batch_idx, (data, target) in tqdm_notebook(enumerate(dataloader), total=len(dataloader)):
# move data batch to GPU
data = data.cuda()
target = target.cuda()
# forward pass
output = val_model(data)
loss = F.binary_cross_entropy(output, target.unsqueeze(1))
# compute average loss and accuracy
output = output.to('cpu')
target = target.to('cpu')
current_acc = torch.tensor(((output>0.5)== torch.tensor(target.unsqueeze(1), dtype=torch.bool)).sum(), dtype=torch.float)/torch.tensor(len(target), dtype=torch.float)
epoch_loss = ((epoch_loss*batch_idx) + loss.item())/(batch_idx+1)
epoch_accuracy = ((epoch_accuracy*batch_idx) + current_acc.item())/(batch_idx+1)
print("testing loss: {} and testing accuracy: {}".format(epoch_loss, epoch_accuracy))
return epoch_loss, epoch_accuracy
def perform_validation(val_model, dataloader):
with torch.no_grad():
epoch_loss = 0
epoch_accuracy = 0
for batch_idx, (data, target) in tqdm_notebook(enumerate(dataloader), total=len(dataloader)):
# move data batch to GPU
data = data.cuda()
target = target.cuda()
# forward pass
output = val_model(data)
# print(output, target.unsqueeze(1))
loss = F.binary_cross_entropy(output, target.unsqueeze(1))
# compute average loss and accuracy
output = output.to('cpu')
target = target.to('cpu')
current_acc = torch.tensor(((output>0.5)== torch.tensor(target.unsqueeze(1), dtype=torch.bool)).sum(), dtype=torch.float)/torch.tensor(len(target), dtype=torch.float)
epoch_loss = ((epoch_loss*batch_idx) + loss.item())/(batch_idx+1)
epoch_accuracy = ((epoch_accuracy*batch_idx) + current_acc.item())/(batch_idx+1)
print("val loss: {} and val accuracy: {}".format(epoch_loss, epoch_accuracy))
return epoch_loss, epoch_accuracy
def perform_training(val_model, dataloader, optimizer):
epoch_loss = 0
epoch_accuracy = 0
for batch_idx, (data, target) in tqdm_notebook(enumerate(dataloader), total=len(dataloader)):
# move data batch to GPU
data = data.cuda()
target = target.cuda()
# zero the parameter gradients
optimizer.zero_grad()
# forward pass
output = val_model(data)
loss = F.binary_cross_entropy(output, target.unsqueeze(1))
# backward pass
loss.backward()
optimizer.step()
# compute average loss and accuracy
output = output.to('cpu')
target = target.to('cpu')
current_acc = torch.tensor(((output>0.5)== torch.tensor(target.unsqueeze(1), dtype=torch.bool)).sum(), dtype=torch.float)/torch.tensor(len(target), dtype=torch.float)
epoch_loss = ((epoch_loss*batch_idx) + loss.item())/(batch_idx+1)
epoch_accuracy = ((epoch_accuracy*batch_idx) + current_acc.item())/(batch_idx+1)
print("train loss: {} and train accuracy: {}".format(epoch_loss, epoch_accuracy))
return epoch_loss, epoch_accuracy
###Output
_____no_output_____
###Markdown
Declare genuine worker
###Code
class target_worker(worker.base_workerclass):
def __init__(self, name, attributes_df, model):
super().__init__(name, False)
self.worker_attributes_df = attributes_df[attributes_df['target']==1]
print("initializing genuine worker node with ",len(self.worker_attributes_df)," data points")
self.model = model
self.local_iters = local_iterations
# train val split
all_names = self.worker_attributes_df.person.unique()
tt_msk = np.random.rand(len(all_names)) < 0.8
train_names = all_names[tt_msk]
val_names = all_names[~tt_msk]
del all_names, tt_msk
# set optimizer
self.set_optim()
# create the train and validation dataframes
train_df = self.worker_attributes_df.loc[self.worker_attributes_df['person'].isin(train_names)]
val_df = self.worker_attributes_df.loc[self.worker_attributes_df['person'].isin(val_names)]
train_dataset = LFWDataset(data_path, train_df, transform=transforms.Compose([
# transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor()
]))
val_dataset = LFWDataset(data_path, val_df, transform=transforms.Compose([
# transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor()]))
del train_df, val_df
self.train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
self.val_dataloader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
print(len(self.train_dataloader), len(self.val_dataloader))
def set_param(self, w):
self.model.load_state_dict(w)
def get_params(self):
return self.model.state_dict()
def set_optim(self):
self.optim = optim.Adam(self.model.parameters(), lr=learning_rate)
def client_update(self, global_epoch):
global writer
self.model = self.model.cuda()
prev_w = copy.deepcopy(self.model.state_dict())
# unfreeze layers
# if 5 == global_epoch:
# self.model.unfreeze_layer3()
# if 20 == global_epoch:
# self.model.unfreeze_layer2()
# if 50 == global_epoch:
# self.model.unfreeze_layer1()
for epoch in range(self.local_iters):
# run train and val epochs
print("sub-epoch: {}".format(epoch))
self.model.train()
train_loss, train_acc = perform_training(self.model, self.train_dataloader, self.optim)
writer.add_scalar('training loss_'+self.name, train_loss, (global_epoch*self.local_iters)+epoch)
writer.add_scalar('training accuracy_'+self.name, train_acc, (global_epoch*self.local_iters)+epoch)
self.model.eval()
val_loss, val_acc = perform_validation(self.model, self.val_dataloader)
writer.add_scalar('validation loss_'+self.name, val_loss, (global_epoch*self.local_iters)+epoch)
writer.add_scalar('validation accuracy_'+self.name, val_acc, (global_epoch*self.local_iters)+epoch)
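# return the per-tensor weight change (new - old) accumulated over the local iterations;
# the server aggregates these deltas across workers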
graddif = OrderedDict()
for (item1, item2) in zip(self.model.state_dict().items(),prev_w.items()):
key1=item1[0]
value1=item1[1]
key2=item2[0]
value2=item2[1]
diffval = value1-value2
graddif.update({key1:diffval.cpu()})
self.model = self.model.cpu()
return graddif
###Output
_____no_output_____
###Markdown
Declare malicious worker
###Code
# class malicious_worker(worker.base_workerclass):
# def __init__(self, attributes_df, model):
# super().__init__(True)
# self.worker_attributes_df = attributes_df[attributes_df['target']==0]
# print("initializing malicious worker node with ",len(self.worker_attributes_df)," data points")
# self.model = model
# self.local_iters = 5
# def set_param(self, w):
# self.model.load_state_dict(w)
# def set_optim(self):
# self.optim = optim.Adam(self.model.parameters(), lr=learning_rate)
# def client_update(self):
# print('ss')
###Output
_____no_output_____
###Markdown
Initialize components of our simulations
###Code
server1 = server.server(server_learning_rate)
workers = wh.workerhandler([target_worker("w1", train_val_df,define_model()),target_worker("w2", train_val_df,define_model())])
tm = tu.topology_manager()
###Output
_____no_output_____
###Markdown
Define network topology
###Code
tm.connect_star(server1, workers.get_all_workers())
plot = tm.plot_topology()
###Output
_____no_output_____
###Markdown
Start Training
###Code
# initialize server weights as model average
server1.set_init_weights(workers.get_average_weights())
# start training
for epoch in range(epochs):
print("Epoch: ", epoch)
new_grad = workers.perform_updates(epoch)
new_w = server1.aggregate(new_grad)
workers.set_param(new_w)
###Output
Epoch: 0
training on worker: w1
sub-epoch: 0
###Markdown
Evaluate Model
###Code
# evaluate final model
eval_model = define_model()
eval_model.load_state_dict(new_w)
eval_model.eval()
eval_model = eval_model.cuda()
torch.save(eval_model.state_dict(), "models/experiment2_model.pt")
test_dataset = LFWDataset(data_path, test_df, transform=transforms.Compose([
# transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor()
]))
test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_loss, test_acc = perform_evaluation(eval_model, test_dataloader)
###Output
_____no_output_____
|
tutorial/tutorial06_keywordsgenerator.ipynb
|
###Markdown
KeywordsGenerator class

The KeywordsGenerator class extracts relevant keywords in the text data **based on a tf-idf score computed on the training dataset**. The input dataframe of KeywordsGenerator **requires a *tokens* column** for which each element is a list of strings. The *tokens* column can be generated with a Tokenizer object.
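For instance, one element of the *tokens* column could look like this (illustrative values only, not taken from the actual dataset):

```python
# df_emails_preprocessed.tokens[0]
# ['client', 'souhaite', 'resilier', 'contrat', 'habitation']
```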
###Code
import pandas as pd
import ast
df_emails_preprocessed = pd.read_csv('./data/emails_preprocessed.csv', encoding='utf-8', sep=';')
df_emails_preprocessed = df_emails_preprocessed[['tokens']]
df_emails_preprocessed['tokens'] = df_emails_preprocessed['tokens'].apply(lambda x: ast.literal_eval(x))
df_emails_preprocessed.tokens[0]
###Output
_____no_output_____
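###Markdown
As a rough illustration of the idea described above (and not Melusine's internal implementation), the snippet below shows how tf-idf scores fitted on a training corpus can be used to rank candidate keywords for a single document. The toy corpus and the use of scikit-learn's TfidfVectorizer are assumptions made purely for this example.
###Code
# Conceptual tf-idf keyword ranking on a toy corpus (illustrative only, not Melusine's code).
from sklearn.feature_extraction.text import TfidfVectorizer

toy_corpus = [
    "je souhaite un devis pour mon contrat habitation",
    "demande de resiliation du contrat auto",
    "merci de m envoyer une attestation pour mon dossier",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(toy_corpus)        # "training" step: the idf weights come from the corpus
vocab = vectorizer.get_feature_names_out()
scores = tfidf[0].toarray().ravel()                 # tf-idf scores of the first document
top_keywords = sorted(zip(vocab, scores), key=lambda t: t[1], reverse=True)[:5]
print(top_keywords)
###Output
_____no_output_____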
###Markdown
Arguments The specific parameters of the KeywordsGenerator class are:- max_tfidf_features : size of vocabulary for tfidf- keywords : list of keywords to be extracted in priority (this list can be defined in the conf file)- stopwords : list of keywords to be ignored (this list can be defined in the conf file)- resample : when the DataFrame contains a ‘label’ column, balance the dataset by resampling- n_max_keywords : maximum number of keywords to be returned for each email- n_min_keywords : minimum number of keywords to be returned for each email- threshold_keywords : minimum tf-idf score for a word to be selected as keyword (a generic sketch of how these selection parameters can combine is given after the instantiation below)
###Code
keywords = ['devis', 'contrat', 'resilitation']
stopwords = ["au", "aux", "avec", "ce", "ces", "dans", "de", "des", "du",
"elle", "en", "et", "eux", "il", "je", "la", "le", "leur", "lui", "ma",
"mais", "me", "même", "mes", "moi", "mon", "ne", "nos", "notre", "nous",
"on", "ou","par", "pas", "pour", "qu", "que", "qui", "sa", "se", "ses",
"son", "sur","ta", "te", "tes", "toi", "ton", "tu", "un", "une", "vos",
"votre", "vous", "c", "d", "j", "l", "à", "m", "n", "s", "t", "y", "été",
"étée", "étées", "étés", "étant", "étante", "étants", "étantes", "suis",
"es", "est", "sommes", "êtes", "sont", "serai", "seras", "sera", "serons",
"serez", "seront", "serais", "serait", "serions", "seriez", "seraient",
"étais", "était", "étions", "étiez", "étaient", "fus", "fut", "fûmes",
"fûtes", "furent", "sois", "soit", "soyons", "soyez", "soient", "fusse",
"fusses", "fût", "fussions", "fussiez", "fussent", "ayant", "ayante",
"ayantes", "ayants", "eu", "eue", "eues", "eus", "ai", "as", "avons",
"avez", "ont", "aurai", "auras", "aura", "aurons", "aurez", "auront",
"aurais", "aurait", "aurions", "auriez", "auraient", "avais", "avait",
"avions", "aviez", "avaient", "eut", "eûmes", "eûtes", "eurent", "aie",
"aies", "ait", "ayons", "ayez", "aient", "eusse", "eusses", "eût",
"eussions", "eussiez", "eussent", "suivant"],
###Output
_____no_output_____
###Markdown
Defining the KeywordsGenerator
###Code
from melusine.summarizer.keywords_generator import KeywordsGenerator
keywords_generator = KeywordsGenerator(keywords = keywords,
stopwords = stopwords,
n_max_keywords=5,
n_min_keywords=0,
threshold_keywords=0.1,
keywords_coef=10)
###Output
_____no_output_____
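###Markdown
How the selection parameters combine is handled inside KeywordsGenerator itself. As a generic, hedged sketch (not the library's actual code), one common rule is: rank an email's tokens by tf-idf, drop those below threshold_keywords, then keep at most n_max_keywords and at least n_min_keywords. The select_keywords helper below is hypothetical and only illustrates that rule.
###Code
# Hypothetical illustration of a threshold + min/max selection rule (assumed behaviour, not Melusine's code).
def select_keywords(scored_tokens, threshold=0.1, n_min=0, n_max=5):
    """scored_tokens: list of (token, tfidf_score) pairs for one email."""
    ranked = sorted(scored_tokens, key=lambda t: t[1], reverse=True)
    selected = [tok for tok, score in ranked if score >= threshold][:n_max]
    # if the threshold filtered out too many tokens, fall back to the best-ranked ones
    if len(selected) < n_min:
        selected = [tok for tok, _ in ranked[:n_min]]
    return selected

print(select_keywords([("devis", 0.42), ("bonjour", 0.05), ("contrat", 0.31)]))
###Output
_____no_output_____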
###Markdown
Training the KeywordsGenerator
###Code
keywords_generator.fit(df_emails_preprocessed)
###Output
_____no_output_____
###Markdown
Extracting keywords
###Code
df_emails_preprocessed = keywords_generator.transform(df_emails_preprocessed)
df_emails_preprocessed.head()
df_emails_preprocessed.tokens[1]
df_emails_preprocessed.keywords[1]
###Output
_____no_output_____
###Markdown
KeywordsGenerator class The KeywordsGenerator class extracts relevant keywords from the text data **based on a tf-idf score computed on the training dataset**. The input dataframe of KeywordsGenerator **requires a *tokens* column** for which each element is a list of strings. The *tokens* column can be generated with a Tokenizer object
###Code
import pandas as pd
import ast
df_emails_preprocessed = pd.read_csv('./data/emails_preprocessed.csv', encoding='utf-8', sep=';')
df_emails_preprocessed = df_emails_preprocessed[['tokens']]
df_emails_preprocessed['tokens'] = df_emails_preprocessed['tokens'].apply(lambda x: ast.literal_eval(x))
df_emails_preprocessed.tokens[0]
###Output
_____no_output_____
###Markdown
Arguments The specific parameters of the KeywordsGenerator class are:- max_tfidf_features : size of vocabulary for tfidf- keywords : list of keywords to be extracted in priority (this list can be defined in the conf file)- stopwords : list of keywords to be ignored (this list can be defined in the conf file)- resample : when the DataFrame contains a ‘label’ column, balance the dataset by resampling- n_max_keywords : maximum number of keywords to be returned for each email- n_min_keywords : minimum number of keywords to be returned for each email- threshold_keywords : minimum tf-idf score for a word to be selected as keyword
###Code
keywords = ['devis', 'contrat', 'resilitation']
stopwords = ["au", "aux", "avec", "ce", "ces", "dans", "de", "des", "du",
"elle", "en", "et", "eux", "il", "je", "la", "le", "leur", "lui", "ma",
"mais", "me", "même", "mes", "moi", "mon", "ne", "nos", "notre", "nous",
"on", "ou","par", "pas", "pour", "qu", "que", "qui", "sa", "se", "ses",
"son", "sur","ta", "te", "tes", "toi", "ton", "tu", "un", "une", "vos",
"votre", "vous", "c", "d", "j", "l", "à", "m", "n", "s", "t", "y", "été",
"étée", "étées", "étés", "étant", "étante", "étants", "étantes", "suis",
"es", "est", "sommes", "êtes", "sont", "serai", "seras", "sera", "serons",
"serez", "seront", "serais", "serait", "serions", "seriez", "seraient",
"étais", "était", "étions", "étiez", "étaient", "fus", "fut", "fûmes",
"fûtes", "furent", "sois", "soit", "soyons", "soyez", "soient", "fusse",
"fusses", "fût", "fussions", "fussiez", "fussent", "ayant", "ayante",
"ayantes", "ayants", "eu", "eue", "eues", "eus", "ai", "as", "avons",
"avez", "ont", "aurai", "auras", "aura", "aurons", "aurez", "auront",
"aurais", "aurait", "aurions", "auriez", "auraient", "avais", "avait",
"avions", "aviez", "avaient", "eut", "eûmes", "eûtes", "eurent", "aie",
"aies", "ait", "ayons", "ayez", "aient", "eusse", "eusses", "eût",
"eussions", "eussiez", "eussent", "suivant"],
###Output
_____no_output_____
###Markdown
Defining the KeywordsGenerator
###Code
from melusine.summarizer.keywords_generator import KeywordsGenerator
keywords_generator = KeywordsGenerator(keywords = keywords,
stopwords = stopwords,
n_max_keywords=5,
n_min_keywords=0,
threshold_keywords=0.1,
keywords_coef=10)
###Output
_____no_output_____
###Markdown
Training the KeywordsGenerator
###Code
keywords_generator.fit(df_emails_preprocessed)
###Output
/Users/florianarthur/opt/anaconda3/envs/melusine_new/lib/python3.8/site-packages/sklearn/base.py:209: FutureWarning: From version 0.24, get_params will raise an AttributeError if a parameter cannot be retrieved as an instance attribute. Previously it would return None.
warnings.warn('From version 0.24, get_params will raise an '
###Markdown
Extracting keywords
###Code
df_emails_preprocessed = keywords_generator.transform(df_emails_preprocessed)
df_emails_preprocessed.head()
df_emails_preprocessed.tokens[1]
df_emails_preprocessed.keywords[1]
###Output
_____no_output_____
###Markdown
KeywordsGenerator class The KeywordsGenerator class extracts relevant keywords from the text data **based on a tf-idf score computed on the training dataset**. Load data KeywordsGenerator **requires a *tokens* column** for which each element is a list of strings. (The *tokens* column can be generated with a Tokenizer object)
###Code
from melusine.data.data_loader import load_email_data
df_emails = load_email_data(type="preprocessed")
df_emails.tokens[0]
###Output
_____no_output_____
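###Markdown
If a dataset does not already provide a *tokens* column, one has to be built before calling KeywordsGenerator. Melusine ships its own Tokenizer for that purpose; the snippet below is only a minimal stand-in that lower-cases and splits a text column on whitespace, and the clean_body column name is an assumption made for the example.
###Code
# Minimal stand-in for producing a tokens column (the real Melusine Tokenizer does much more than this).
import pandas as pd

df_example = pd.DataFrame({"clean_body": ["je voudrais un devis pour mon contrat"]})
df_example["tokens"] = df_example["clean_body"].str.lower().str.split()
print(df_example.tokens[0])
###Output
_____no_output_____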
###Markdown
Arguments The specific parameters of the KeywordsGenerator class are:- max_tfidf_features : size of vocabulary for tfidf- keywords : list of keywords to be extracted in priority (this list can be defined in the conf file)- stopwords : list of keywords to be ignored (this list can be defined in the conf file)- resample : when the DataFrame contains a ‘label’ column, balance the dataset by resampling- n_max_keywords : maximum number of keywords to be returned for each email- n_min_keywords : minimum number of keywords to be returned for each email- threshold_keywords : minimum tf-idf score for a word to be selected as keyword
###Code
keywords = ['devis', 'contrat', 'resilitation']
stopwords = ["au", "aux", "avec", "ce", "ces", "dans", "de", "des", "du",
"elle", "en", "et", "eux", "il", "je", "la", "le", "leur", "lui", "ma",
"mais", "me", "même", "mes", "moi", "mon", "ne", "nos", "notre", "nous",
"on", "ou","par", "pas", "pour", "qu", "que", "qui", "sa", "se", "ses",
"son", "sur","ta", "te", "tes", "toi", "ton", "tu", "un", "une", "vos",
"votre", "vous", "c", "d", "j", "l", "à", "m", "n", "s", "t", "y", "été",
"étée", "étées", "étés", "étant", "étante", "étants", "étantes", "suis",
"es", "est", "sommes", "êtes", "sont", "serai", "seras", "sera", "serons",
"serez", "seront", "serais", "serait", "serions", "seriez", "seraient",
"étais", "était", "étions", "étiez", "étaient", "fus", "fut", "fûmes",
"fûtes", "furent", "sois", "soit", "soyons", "soyez", "soient", "fusse",
"fusses", "fût", "fussions", "fussiez", "fussent", "ayant", "ayante",
"ayantes", "ayants", "eu", "eue", "eues", "eus", "ai", "as", "avons",
"avez", "ont", "aurai", "auras", "aura", "aurons", "aurez", "auront",
"aurais", "aurait", "aurions", "auriez", "auraient", "avais", "avait",
"avions", "aviez", "avaient", "eut", "eûmes", "eûtes", "eurent", "aie",
"aies", "ait", "ayons", "ayez", "aient", "eusse", "eusses", "eût",
"eussions", "eussiez", "eussent", "suivant"],
###Output
_____no_output_____
###Markdown
Defining the KeywordsGenerator
###Code
from melusine.summarizer.keywords_generator import KeywordsGenerator
keywords_generator = KeywordsGenerator(keywords = keywords,
stopwords = stopwords,
n_max_keywords=5,
n_min_keywords=0,
threshold_keywords=0.1,
keywords_coef=10)
###Output
_____no_output_____
###Markdown
Training the KeywordsGenerator
###Code
keywords_generator.fit(df_emails)
###Output
/Users/hperrier/.conda/envs/melusine_perso/lib/python3.7/site-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function get_feature_names is deprecated; get_feature_names is deprecated in 1.0 and will be removed in 1.2. Please use get_feature_names_out instead.
warnings.warn(msg, category=FutureWarning)
###Markdown
Extracting keywords
###Code
df_emails_preprocessed = keywords_generator.transform(df_emails)
df_emails_preprocessed.head()
df_emails_preprocessed.tokens[1]
df_emails_preprocessed.keywords[1]
###Output
_____no_output_____
|
notebooks/paraphraser_bible.ipynb
|
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/delkind/paraphraser/blob/master/notebooks/paraphraser_bible.ipynb)
###Code
%load_ext autoreload
%autoreload 2
# colab requirements
#!pip install spacy #only for bible
!pip install pydrive #to save to google-drive
!pip install num2words #only for numbers
import tensorflow as tf
import sys
import numpy as np
# our github proj!
!rm -r paraphraser #remove previous github copy if needed
!git clone https://github.com/delkind/paraphraser.git
sys.path.append('paraphraser/src')
#usage example
from utils.persistency import Persistency
from models import D_G_Model
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
Requirement already satisfied: pydrive in /usr/local/lib/python3.6/dist-packages (1.3.1)
Requirement already satisfied: google-api-python-client>=1.2 in /usr/local/lib/python3.6/dist-packages (from pydrive) (1.6.7)
Requirement already satisfied: oauth2client>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (4.1.3)
Requirement already satisfied: PyYAML>=3.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (3.13)
Requirement already satisfied: six<2dev,>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (1.11.0)
Requirement already satisfied: httplib2<1dev,>=0.9.2 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (0.11.3)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (3.0.0)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.2.2)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (4.0)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.4.4)
Requirement already satisfied: num2words in /usr/local/lib/python3.6/dist-packages (0.5.7)
###Markdown
Dataset
###Code
from dataset.bible import BibleDataset
dataset = BibleDataset(["asv", "ylt"], "https://raw.githubusercontent.com/scrollmapper/bible_databases/master/csv/t_",'.csv')
# example for datasets generators
def show_dataset_example():
(x1,x2),y1=next(dataset.gen_g(dataset.train, batch_size=3))
for b in range(len(x1)):
print ('x1',x1.shape,dataset.recostruct_sentence(x1[b]))
print ('x2',x2.shape,dataset.recostruct_sentence(x2[b]))
print ('y1',y1.shape,dataset.recostruct_sentence(y1[b].argmax(axis=1)))
print ('')
print ('results of gen_adv')
(x1, x2), (y1,y2) = next(dataset.gen_adv(dataset.train, batch_size=3, noise_std=0.5))
for b in range(len(x1)):
print ('x1',x1.shape,dataset.recostruct_sentence(x1[b]))
print ('x2',x2.shape,dataset.recostruct_sentence(x2[b]))
print ('y1',y1.shape,dataset.recostruct_sentence(y1[b].argmax(axis=1)))
print ('y2',y2[b])
show_dataset_example() # run this to see what the generated batches look like
from models import D_G_Model,D_G_Trainer
from decoder import SamplingDecoder
model = D_G_Model(num_encoder_tokens=len(dataset.word2index),
num_decoder_tokens=len(dataset.word2index), #from dataset 3628
style_out_size=len(dataset.style2index), #from dataset 2
cuddlstm=True,
latent_dim = 50, #twice the default. make it stronger! but slower
bidi_encoder = True,
adv_loss_weight=100,) #500
model.build_all()
sampler= SamplingDecoder(model)
trainer = D_G_Trainer(model,dataset)
train_size = len(dataset.index2style) * (dataset.train[1] - dataset.train[0])
batch_size=64
epoc = int(train_size/batch_size)
print ('epoc is of',epoc,'of batches',batch_size,'total train_size',train_size)
###Output
unoptimzied decode_sequence_batch, running each of the N sample seperatly
unoptimzied decode_sequence_batch, running each of the N sample seperatly
epoc is of 756 of batches 64 total train_size 48432
###Markdown
Cycle approach
###Code
for outside_epoc in range(200):
steps_d,steps_g=(20,20) #use 20 steps for verbosity
for inside_epoc in range(20):
#trainer.train_g(steps_d,batch_size=32,noise_std=0.0)
trainer.train_d(steps_d,batch_size=32,noise=0.00)
trainer.train_d_g(steps_g ,batch_size=32,noise=0.0,noise_std=1.0)
trainer.train_g_cycle(steps_g,batch_size=32,noise_std=10.0)
steps_d,steps_g= (100,100)
trainer.plt_all()
sampler.show_sample(dataset,'train' ,sample_ids=[10,25],teacher_forcing=True) #,8000+0
###Output
Epoch 1/1
20/20 [==============================] - 0s 15ms/step - loss: 0.6935 - acc: 0.4969 - val_loss: 0.7006 - val_acc: 0.4062
Epoch 1/1
20/20 [==============================] - 2s 108ms/step - loss: 2.7788 - dcd_sfmax_loss: 2.7304 - styl_clsf_loss: 4.8376e-04 - val_loss: 3.9134 - val_dcd_sfmax_loss: 3.9063 - val_styl_clsf_loss: 7.0550e-05
Epoch 1/1
20/20 [==============================] - 8s 383ms/step - loss: 2.8980 - dcd_sfmax_loss: 2.8662 - styl_clsf_loss: 3.1829e-04 - val_loss: 4.1053 - val_dcd_sfmax_loss: 4.0908 - val_styl_clsf_loss: 1.4523e-04
Epoch 1/1
100/100 [==============================] - 1s 14ms/step - loss: 0.6945 - acc: 0.4903 - val_loss: 0.6947 - val_acc: 0.5625
Epoch 1/1
100/100 [==============================] - 12s 122ms/step - loss: 2.8534 - dcd_sfmax_loss: 2.8273 - styl_clsf_loss: 2.6097e-04 - val_loss: 4.0516 - val_dcd_sfmax_loss: 4.0447 - val_styl_clsf_loss: 6.8851e-05
Epoch 1/1
100/100 [==============================] - 34s 338ms/step - loss: 3.0329 - dcd_sfmax_loss: 2.7943 - styl_clsf_loss: 0.0024 - val_loss: 4.1248 - val_dcd_sfmax_loss: 4.1233 - val_styl_clsf_loss: 1.4730e-05
Epoch 1/1
100/100 [==============================] - 2s 15ms/step - loss: 0.6908 - acc: 0.5328 - val_loss: 0.6846 - val_acc: 0.6250
Epoch 1/1
100/100 [==============================] - 11s 114ms/step - loss: 2.7842 - dcd_sfmax_loss: 2.7494 - styl_clsf_loss: 3.4829e-04 - val_loss: 4.2462 - val_dcd_sfmax_loss: 4.2145 - val_styl_clsf_loss: 3.1742e-04
Epoch 1/1
100/100 [==============================] - 35s 351ms/step - loss: 2.7593 - dcd_sfmax_loss: 2.7244 - styl_clsf_loss: 3.4924e-04 - val_loss: 3.9739 - val_dcd_sfmax_loss: 3.9729 - val_styl_clsf_loss: 1.0714e-05
Epoch 1/1
100/100 [==============================] - 2s 15ms/step - loss: 0.6922 - acc: 0.5141 - val_loss: 0.6995 - val_acc: 0.4062
Epoch 1/1
100/100 [==============================] - 10s 101ms/step - loss: 2.7353 - dcd_sfmax_loss: 2.7036 - styl_clsf_loss: 3.1658e-04 - val_loss: 4.0194 - val_dcd_sfmax_loss: 4.0059 - val_styl_clsf_loss: 1.3520e-04
Epoch 1/1
100/100 [==============================] - 35s 348ms/step - loss: 2.5958 - dcd_sfmax_loss: 2.5801 - styl_clsf_loss: 1.5642e-04 - val_loss: 4.3490 - val_dcd_sfmax_loss: 4.3480 - val_styl_clsf_loss: 1.0062e-05
Epoch 1/1
100/100 [==============================] - 3s 27ms/step - loss: 0.6918 - acc: 0.5259 - val_loss: 0.7062 - val_acc: 0.3438
Epoch 1/1
100/100 [==============================] - 13s 134ms/step - loss: 2.7232 - dcd_sfmax_loss: 2.7040 - styl_clsf_loss: 1.9191e-04 - val_loss: 4.1515 - val_dcd_sfmax_loss: 4.1501 - val_styl_clsf_loss: 1.3679e-05
Epoch 1/1
100/100 [==============================] - 49s 494ms/step - loss: 2.6067 - dcd_sfmax_loss: 2.5960 - styl_clsf_loss: 1.0737e-04 - val_loss: 4.0432 - val_dcd_sfmax_loss: 4.0430 - val_styl_clsf_loss: 1.9744e-06
Epoch 1/1
100/100 [==============================] - 3s 26ms/step - loss: 0.6922 - acc: 0.5225 - val_loss: 0.6899 - val_acc: 0.5938
Epoch 1/1
100/100 [==============================] - 9s 88ms/step - loss: 2.5424 - dcd_sfmax_loss: 2.5157 - styl_clsf_loss: 2.6697e-04 - val_loss: 4.1854 - val_dcd_sfmax_loss: 4.1809 - val_styl_clsf_loss: 4.4975e-05
Epoch 1/1
100/100 [==============================] - 48s 484ms/step - loss: 2.6054 - dcd_sfmax_loss: 2.5954 - styl_clsf_loss: 9.9976e-05 - val_loss: 3.9654 - val_dcd_sfmax_loss: 3.9653 - val_styl_clsf_loss: 1.4305e-06
Epoch 1/1
100/100 [==============================] - 3s 26ms/step - loss: 0.6918 - acc: 0.5181 - val_loss: 0.7034 - val_acc: 0.4375
Epoch 1/1
100/100 [==============================] - 7s 69ms/step - loss: 1.2079 - dcd_sfmax_loss: 1.1831 - styl_clsf_loss: 2.4720e-04 - val_loss: 4.5599 - val_dcd_sfmax_loss: 4.5584 - val_styl_clsf_loss: 1.5048e-05
Epoch 1/1
100/100 [==============================] - 50s 504ms/step - loss: 3.0247 - dcd_sfmax_loss: 2.7866 - styl_clsf_loss: 0.0024 - val_loss: 4.1690 - val_dcd_sfmax_loss: 4.1685 - val_styl_clsf_loss: 4.5188e-06
Epoch 1/1
100/100 [==============================] - 3s 33ms/step - loss: 0.6921 - acc: 0.5159 - val_loss: 0.6946 - val_acc: 0.5000
Epoch 1/1
100/100 [==============================] - 17s 173ms/step - loss: 2.8801 - dcd_sfmax_loss: 2.8675 - styl_clsf_loss: 1.2553e-04 - val_loss: 4.1806 - val_dcd_sfmax_loss: 4.1780 - val_styl_clsf_loss: 2.6327e-05
Epoch 1/1
100/100 [==============================] - 47s 466ms/step - loss: 2.5807 - dcd_sfmax_loss: 2.5723 - styl_clsf_loss: 8.3801e-05 - val_loss: 4.2222 - val_dcd_sfmax_loss: 4.2220 - val_styl_clsf_loss: 2.2557e-06
Epoch 1/1
100/100 [==============================] - 3s 29ms/step - loss: 0.6934 - acc: 0.5088 - val_loss: 0.6947 - val_acc: 0.5312
Epoch 1/1
100/100 [==============================] - 9s 92ms/step - loss: 2.4873 - dcd_sfmax_loss: 2.4596 - styl_clsf_loss: 2.7740e-04 - val_loss: 4.2270 - val_dcd_sfmax_loss: 4.2158 - val_styl_clsf_loss: 1.1156e-04
Epoch 1/1
100/100 [==============================] - 48s 485ms/step - loss: 2.6184 - dcd_sfmax_loss: 2.6093 - styl_clsf_loss: 9.0672e-05 - val_loss: 4.1558 - val_dcd_sfmax_loss: 4.1555 - val_styl_clsf_loss: 3.1237e-06
Epoch 1/1
100/100 [==============================] - 3s 29ms/step - loss: 0.6932 - acc: 0.5134 - val_loss: 0.6928 - val_acc: 0.4375
Epoch 1/1
100/100 [==============================] - 11s 112ms/step - loss: 2.5528 - dcd_sfmax_loss: 2.5409 - styl_clsf_loss: 1.1905e-04 - val_loss: 4.1093 - val_dcd_sfmax_loss: 4.1089 - val_styl_clsf_loss: 3.8557e-06
Epoch 1/1
100/100 [==============================] - 48s 476ms/step - loss: 2.5555 - dcd_sfmax_loss: 2.5471 - styl_clsf_loss: 8.4151e-05 - val_loss: 4.2873 - val_dcd_sfmax_loss: 4.2873 - val_styl_clsf_loss: 0.0000e+00
Epoch 1/1
100/100 [==============================] - 3s 27ms/step - loss: 0.6925 - acc: 0.5247 - val_loss: 0.7003 - val_acc: 0.4375
Epoch 1/1
100/100 [==============================] - 13s 130ms/step - loss: 2.7291 - dcd_sfmax_loss: 2.7058 - styl_clsf_loss: 2.3305e-04 - val_loss: 3.9591 - val_dcd_sfmax_loss: 3.9523 - val_styl_clsf_loss: 6.8123e-05
Epoch 1/1
100/100 [==============================] - 48s 477ms/step - loss: 2.5074 - dcd_sfmax_loss: 2.4994 - styl_clsf_loss: 7.9588e-05 - val_loss: 4.4571 - val_dcd_sfmax_loss: 4.4570 - val_styl_clsf_loss: 1.7881e-07
Epoch 1/1
100/100 [==============================] - 3s 31ms/step - loss: 0.6934 - acc: 0.4941 - val_loss: 0.6833 - val_acc: 0.5938
Epoch 1/1
100/100 [==============================] - 9s 91ms/step - loss: 2.4340 - dcd_sfmax_loss: 2.4173 - styl_clsf_loss: 1.6719e-04 - val_loss: 4.1519 - val_dcd_sfmax_loss: 4.1486 - val_styl_clsf_loss: 3.3556e-05
Epoch 1/1
100/100 [==============================] - 54s 541ms/step - loss: 2.8600 - dcd_sfmax_loss: 2.8445 - styl_clsf_loss: 1.5535e-04 - val_loss: 4.2851 - val_dcd_sfmax_loss: 4.2843 - val_styl_clsf_loss: 7.5400e-06
Epoch 1/1
100/100 [==============================] - 3s 35ms/step - loss: 0.6935 - acc: 0.4984 - val_loss: 0.6928 - val_acc: 0.5312
Epoch 1/1
100/100 [==============================] - 11s 111ms/step - loss: 2.5602 - dcd_sfmax_loss: 2.5497 - styl_clsf_loss: 1.0496e-04 - val_loss: 5.2617 - val_dcd_sfmax_loss: 5.2200 - val_styl_clsf_loss: 4.1679e-04
Epoch 1/1
100/100 [==============================] - 51s 510ms/step - loss: 2.5582 - dcd_sfmax_loss: 2.5517 - styl_clsf_loss: 6.4746e-05 - val_loss: 3.9653 - val_dcd_sfmax_loss: 3.9652 - val_styl_clsf_loss: 1.0133e-06
Epoch 1/1
100/100 [==============================] - 3s 27ms/step - loss: 0.6929 - acc: 0.5000 - val_loss: 0.6906 - val_acc: 0.5312
Epoch 1/1
100/100 [==============================] - 12s 118ms/step - loss: 2.4910 - dcd_sfmax_loss: 2.4809 - styl_clsf_loss: 1.0162e-04 - val_loss: 4.1349 - val_dcd_sfmax_loss: 4.1330 - val_styl_clsf_loss: 1.8889e-05
Epoch 1/1
100/100 [==============================] - 52s 524ms/step - loss: 2.7096 - dcd_sfmax_loss: 2.7005 - styl_clsf_loss: 9.0455e-05 - val_loss: 4.3503 - val_dcd_sfmax_loss: 4.3502 - val_styl_clsf_loss: 5.9605e-07
Epoch 1/1
100/100 [==============================] - 3s 29ms/step - loss: 0.6899 - acc: 0.5413 - val_loss: 0.6896 - val_acc: 0.5000
Epoch 1/1
100/100 [==============================] - 7s 69ms/step - loss: 1.1164 - dcd_sfmax_loss: 1.0795 - styl_clsf_loss: 3.6915e-04 - val_loss: 4.6433 - val_dcd_sfmax_loss: 4.6377 - val_styl_clsf_loss: 5.6926e-05
Epoch 1/1
100/100 [==============================] - 49s 495ms/step - loss: 2.5909 - dcd_sfmax_loss: 2.5794 - styl_clsf_loss: 1.1528e-04 - val_loss: 4.2227 - val_dcd_sfmax_loss: 4.2226 - val_styl_clsf_loss: 5.3644e-07
Epoch 1/1
100/100 [==============================] - 3s 31ms/step - loss: 0.6923 - acc: 0.5134 - val_loss: 0.6870 - val_acc: 0.5938
Epoch 1/1
100/100 [==============================] - 11s 114ms/step - loss: 2.5350 - dcd_sfmax_loss: 2.5188 - styl_clsf_loss: 1.6196e-04 - val_loss: 4.1320 - val_dcd_sfmax_loss: 4.1272 - val_styl_clsf_loss: 4.7522e-05
Epoch 1/1
100/100 [==============================] - 55s 550ms/step - loss: 2.8193 - dcd_sfmax_loss: 2.8050 - styl_clsf_loss: 1.4275e-04 - val_loss: 4.1705 - val_dcd_sfmax_loss: 4.1667 - val_styl_clsf_loss: 3.8218e-05
Epoch 1/1
100/100 [==============================] - 3s 26ms/step - loss: 0.6923 - acc: 0.5147 - val_loss: 0.6947 - val_acc: 0.5312
Epoch 1/1
100/100 [==============================] - 13s 129ms/step - loss: 2.6965 - dcd_sfmax_loss: 2.6804 - styl_clsf_loss: 1.6062e-04 - val_loss: 4.4465 - val_dcd_sfmax_loss: 4.4369 - val_styl_clsf_loss: 9.5751e-05
Epoch 1/1
100/100 [==============================] - 51s 508ms/step - loss: 2.5045 - dcd_sfmax_loss: 2.4931 - styl_clsf_loss: 1.1446e-04 - val_loss: 4.2590 - val_dcd_sfmax_loss: 4.2577 - val_styl_clsf_loss: 1.2651e-05
Epoch 1/1
100/100 [==============================] - 4s 37ms/step - loss: 0.6890 - acc: 0.5400 - val_loss: 0.6791 - val_acc: 0.5938
Epoch 1/1
100/100 [==============================] - 11s 111ms/step - loss: 2.4911 - dcd_sfmax_loss: 2.4603 - styl_clsf_loss: 3.0833e-04 - val_loss: 4.2404 - val_dcd_sfmax_loss: 4.2398 - val_styl_clsf_loss: 6.0573e-06
Epoch 1/1
100/100 [==============================] - 55s 554ms/step - loss: 2.8493 - dcd_sfmax_loss: 2.7938 - styl_clsf_loss: 5.5537e-04 - val_loss: 4.2148 - val_dcd_sfmax_loss: 4.2118 - val_styl_clsf_loss: 3.0622e-05
Epoch 1/1
100/100 [==============================] - 4s 35ms/step - loss: 0.6930 - acc: 0.5172 - val_loss: 0.7042 - val_acc: 0.2812
Epoch 1/1
100/100 [==============================] - 9s 88ms/step - loss: 2.5443 - dcd_sfmax_loss: 2.5275 - styl_clsf_loss: 1.6765e-04 - val_loss: 4.3811 - val_dcd_sfmax_loss: 4.3730 - val_styl_clsf_loss: 8.0906e-05
Epoch 1/1
100/100 [==============================] - 49s 491ms/step - loss: 2.5142 - dcd_sfmax_loss: 2.5068 - styl_clsf_loss: 7.4186e-05 - val_loss: 4.2068 - val_dcd_sfmax_loss: 4.2068 - val_styl_clsf_loss: -2.6077e-08
Epoch 1/1
100/100 [==============================] - 3s 25ms/step - loss: 0.6923 - acc: 0.5081 - val_loss: 0.7025 - val_acc: 0.4375
Epoch 1/1
100/100 [==============================] - 9s 93ms/step - loss: 2.4914 - dcd_sfmax_loss: 2.4673 - styl_clsf_loss: 2.4027e-04 - val_loss: 4.2608 - val_dcd_sfmax_loss: 4.2577 - val_styl_clsf_loss: 3.1438e-05
Epoch 1/1
100/100 [==============================] - 47s 469ms/step - loss: 2.4148 - dcd_sfmax_loss: 2.4068 - styl_clsf_loss: 7.9884e-05 - val_loss: 4.3176 - val_dcd_sfmax_loss: 4.3176 - val_styl_clsf_loss: 5.9605e-07
|
sklearn/Banknote Authentication.ipynb
|
###Markdown
Load data
###Code
import pandas as pd
import numpy as np
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
# Load data
np_data = pd.read_csv("../data/banknote.csv").values
# Split data into X and y
X_raw = np_data[:,0:-1].astype(float)
y_raw = np_data[:,-1]
# Shuffle data
X, y = shuffle(X_raw, y_raw, random_state=0)
# Normalize data to avoid high input values
#scaler = StandardScaler()
#scaler.fit(X_raw)
#X = scaler.transform(X_raw)
# Print some stuff
print("Example:")
print(X[0], "->", y[0])
print("")
print("Data shape:", X.shape)
###Output
Example:
[ -1.7713 -10.7665 10.2184 -1.0043] -> 1.0
Data shape: (1372, 4)
###Markdown
Function for evaluating model accuracy
###Code
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
def evaluate_test(model):
print("\n-- Test set --")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=111, stratify=y)
# train model on training dataset
model.fit(X_train, y_train)
# evaluate dataset
y_pred = model.predict(X_test)
# calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
# confusion matrix
print("Confusion Matrix:")
conf_mx = confusion_matrix(y_test, y_pred)
print(conf_mx)
def evaluate_cv(model):
print("\n-- 5-fold CV --")
# 5-fold cross-validation
y_pred = cross_val_predict(model, X, y, cv=5)
# calculate accuracy
accuracy = accuracy_score(y, y_pred)
print("Average accuracy: %.2f%%" % (accuracy * 100.0))
# confusion matrix
print("Confusion Matrix:")
conf_mx = confusion_matrix(y, y_pred)
print(conf_mx)
###Output
_____no_output_____
###Markdown
Naive Bayes
###Code
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
evaluate_test(model)
evaluate_cv(model)
###Output
-- Test set --
Accuracy: 85.45%
Confusion Matrix:
[[137 16]
[ 24 98]]
-- 5-fold CV --
Average accuracy: 83.82%
Confusion Matrix:
[[668 94]
[128 482]]
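###Markdown
The confusion matrices printed above can be read beyond plain accuracy. Taking class 1 as the positive class, the held-out test matrix gives a precision of roughly 0.86 and a recall of roughly 0.80; the snippet below simply redoes that arithmetic on the printed numbers.
###Code
# Precision and recall for class 1, computed from the test-set confusion matrix printed above.
import numpy as np

conf = np.array([[137, 16],
                 [ 24, 98]])  # rows: true class, columns: predicted class
tn, fp, fn, tp = conf.ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print("Precision: %.3f, Recall: %.3f" % (precision, recall))
###Output
_____no_output_____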
|
notebooks/SentimentAnalysisNetwork.ipynb
|
###Markdown
Analyze text sentiment: The machine learning approach. This project is based on Andrew Trask's [Sentiment project](https://github.com/udacity/deep-learning/tree/master/sentiment-network). The dataset is part of the [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/) publication.
###Code
from collections import Counter
import os
import math
from random import randint
import sys
import time
from IPython.display import Image
import numpy as np
from lib.reviews.load_reviews import load_reviews
from lib.reviews.get_words_indexes import get_words_indexes
from lib.activation_functions.sigmoid import sigmoid
from lib.derivatives.sigmoid_derivative import sigmoid_derivative
###Output
_____no_output_____
###Markdown
Load the reviews and labels data
###Code
POSITIVE_DATASET_PATH = "dataset/positive_reviews.txt"
positive_reviews = load_reviews(POSITIVE_DATASET_PATH)
positive_reviews[0]
NEGATIVE_DATASET_PATH = "dataset/negative_reviews.txt"
negative_reviews = load_reviews(NEGATIVE_DATASET_PATH)
negative_reviews[0]
###Output
_____no_output_____
###Markdown
Create the word counters We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
def get_words_count(reviews):
words_counts = Counter()
for index in range(len(reviews)):
words = reviews[index].split(' ')
for word in words:
words_counts[word] += 1
return words_counts
positive_counts = get_words_count(positive_reviews)
negative_counts = get_words_count(negative_reviews)
total_counts = positive_counts + negative_counts
###Output
_____no_output_____
###Markdown
Examine the most common words in positive reviews
###Code
positive_counts.most_common()
###Output
_____no_output_____
###Markdown
And the respective most common words in negative reviews
###Code
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
###Code
pos_neg_ratios = Counter()
for word in positive_counts:
if(positive_counts[word] > 100 or negative_counts[word] > 100):
pos_neg_ratios[word] = math.log(positive_counts[word] / (negative_counts[word] + 1))
###Output
_____no_output_____
###Markdown
Examine the calculated ratios for a few words:
###Code
print(positive_counts["the"])
print(negative_counts["the"])
print(pos_neg_ratios["the"])
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
_____no_output_____
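###Markdown
Before interpreting these values, a small numeric check with made-up counts (not taken from the review data) shows how the log ratio used above behaves: 100 positive versus 10 negative occurrences give roughly +2.2, the mirrored counts give roughly -2.3, and equal counts land close to 0.
###Code
# Illustrative counts only (not from the dataset), showing the symmetry of the log ratio.
import math

for pos, neg in [(100, 10), (10, 100), (100, 100)]:
    print(pos, neg, "->", round(math.log(pos / (neg + 1)), 2))
###Output
_____no_output_____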
###Markdown
Neutral words have a ratio value close to 0. Words expected to appear more often in positive reviews – like "amazing" – have a ratio greater than 0. Words with a ratio lower than 0 are expected to appear more often in negative reviews. Extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs. Build the neural network Assign a seed to our random number generator to ensure we get reproducible results during development.
###Code
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Define the hyperparameters
###Code
# The network learning rate.
learning_rate = 0.001
# The polarity cutoff to exclude values very close to 0.
POLARITY_CUTOFF = 0.02
# The early-stopping threshold, expressed as a validation accuracy percentage.
EARLY_STOPPING_VALUE = 80
# The number of full passes through the whole training dataset.
EPOCHS = 3
###Output
_____no_output_____
###Markdown
Create the word indexes dictionary by processing the positive and negative reviews and keeping only the words whose absolute ratio is greater than the polarity cutoff.
###Code
word_index = 0
words_indexes_dictionary = {}
for word in pos_neg_ratios:
if(abs(pos_neg_ratios[word]) > POLARITY_CUTOFF):
words_indexes_dictionary[word] = word_index
word_index += 1
###Output
_____no_output_____
###Markdown
Define the data sets for training, validating and testing the neural network.
###Code
NEGATIVE = 0
POSITIVE = 1
reviews = []
labels = []
# Insert positive reviews
reviews = positive_reviews[:]
labels = [POSITIVE] * len(reviews)
# Insert randomly negative reviews
for review_index in range(len(negative_reviews)):
index = randint(0, len(reviews))
reviews.insert(index, negative_reviews[review_index])
labels.insert(index, NEGATIVE)
train_reviews = reviews[:16000]
valid_reviews = reviews[16000:17000]
test_reviews = reviews[-5000:]
train_labels = labels[:16000]
valid_labels = labels[16000:17000]
test_labels = labels[-5000:]
###Output
_____no_output_____
###Markdown
Build the neural network structure with a single hidden layer.
###Code
INPUT_LAYER_NODES = len(words_indexes_dictionary)
HIDDEN_LAYER_NODES = 10
OUTPUT_LAYER_NODES = 1
input_to_hidden_weights = np.zeros((INPUT_LAYER_NODES, HIDDEN_LAYER_NODES))
hidden_to_output_weights = np.random.normal(0.0, HIDDEN_LAYER_NODES ** -0.5,
(HIDDEN_LAYER_NODES, OUTPUT_LAYER_NODES))
hidden_layer = np.zeros((1, HIDDEN_LAYER_NODES))
###Output
_____no_output_____
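###Markdown
A short aside on the forward pass used in the training loop below: because each review is encoded as a binary bag of words, multiplying the input vector by the input-to-hidden matrix is the same as summing the rows of that matrix selected by the review's word indexes, which is exactly what the loop does. The check below uses made-up shapes to show the equivalence.
###Code
# Made-up shapes: a binary input times the weight matrix equals the sum of the selected rows.
import numpy as np

rng = np.random.RandomState(0)
weights = rng.normal(size=(6, 3))            # 6 vocabulary words, 3 hidden nodes
word_indexes = [1, 4]                        # words present in a toy review
binary_input = np.zeros((1, 6))
binary_input[0, word_indexes] = 1
full_product = binary_input.dot(weights)     # standard matrix product
row_sum = weights[word_indexes].sum(axis=0)  # what the training loop computes
print(np.allclose(full_product, row_sum))
###Output
_____no_output_____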
###Markdown
Train the neural network Loop through all the given reviews and run a forward and backward pass, updating weights for every item.
###Code
for epoch in range(EPOCHS):
correct_predictions = 0
for review_index in range(len(train_reviews)):
review = train_reviews[review_index]
label = train_labels[review_index]
# Prepare the list of unique word indexes found on current review
words_indexes = get_words_indexes(words_indexes_dictionary, review)
## The forward pass through the network
# Calculate the hidden layer values with the input to hidden weights
hidden_layer = np.zeros((OUTPUT_LAYER_NODES, HIDDEN_LAYER_NODES))
for word_index in words_indexes:
hidden_layer += input_to_hidden_weights[word_index]
# Calculate the output value multiplying the hidden layer values by the hidden to output weights
output = hidden_layer.dot(hidden_to_output_weights)
output = sigmoid(output)
## The network validation
valid_correct_predictions = 0
for valid_index in range(len(valid_reviews)):
valid_review = valid_reviews[valid_index]
valid_label = valid_labels[valid_index]
words_indexes = get_words_indexes(words_indexes_dictionary, valid_review)
hidden_layer = np.zeros((OUTPUT_LAYER_NODES, HIDDEN_LAYER_NODES))
for word_index in words_indexes:
hidden_layer += input_to_hidden_weights[word_index]
valid_output = hidden_layer.dot(hidden_to_output_weights)
valid_output = sigmoid(valid_output)
valid_error = valid_output - valid_label
if(np.abs(valid_error) < 0.5):
valid_correct_predictions += 1
valid_accuracy = valid_correct_predictions * 100 / len(valid_reviews)
# Stop training early once the validation accuracy exceeds the chosen threshold,
# to limit overfitting
if(valid_accuracy > EARLY_STOPPING_VALUE):
print("The early stopping value has been reached during validation.")
break
## The back propagation pass
# Calculate the output error and delta
error = output - label
output_delta = error * sigmoid_derivative(output)
# Calculate the hidden error and delta
hidden_errors = output_delta.dot(hidden_to_output_weights.T)
hidden_deltas = hidden_errors
# Update the network weights using the calculated deltas
hidden_to_output_weights -= hidden_layer.T.dot(output_delta) * learning_rate
for word_index in words_indexes:
input_to_hidden_weights[word_index] -= hidden_deltas[0] * learning_rate
# Keep track of errors and correct predictions
if(np.abs(error) < 0.5):
correct_predictions += 1
accuracy = correct_predictions * 100 / float(review_index + 1)
sys.stdout.write("\rCorrect predictions: " + str(correct_predictions) +
" - Trained: " + str(review_index) +
# " - Valid accuracy: " + str(valid_accuracy) +
" - Testing Accuracy:" + str(accuracy)[:4] + "%")
###Output
_____no_output_____
###Markdown
Test the neural network Use the test_labels to calculate the accuracy of previous predictions
###Code
correct_predictions = 0
for review_index in range(len(test_reviews)):
review = test_reviews[review_index]
label = test_labels[review_index]
# Prepare the list of unique word indexes found on current review
words_indexes = get_words_indexes(words_indexes_dictionary, review)
## The forward pass through the network
# Calculate the hidden layer values with the input to hidden weights
hidden_layer = np.zeros((OUTPUT_LAYER_NODES, HIDDEN_LAYER_NODES))
for word_index in words_indexes:
hidden_layer += input_to_hidden_weights[word_index]
# Calculate the output value multiplying the hidden layer values by the hidden to output weights
output = hidden_layer.dot(hidden_to_output_weights)
output = sigmoid(output)
error = output - label
# Keep track of correct predictions
if(np.abs(error) < 0.5):
correct_predictions += 1
sys.stdout.write("\rCorrect predictions: " + str(correct_predictions) \
+ " - Trained: " + str(review_index) \
+ " - Testing Accuracy:" \
+ str(correct_predictions * 100 / float(review_index + 1))[:4] + "%")
###Output
_____no_output_____
|